[Binary artifact: POSIX tar archive `var/home/core/zuul-output/` containing `logs/kubelet.log.gz` (gzip-compressed kubelet log). The compressed binary payload is not representable as text and has been omitted.]
.wHueav{V3gqHT3~;x|͵t27,JLa7tP]sow ˄&oƼZDWd5o,hViפ2g!ڰbtVu4MgVY>H烵d #q\SGb^ " /-0"VhǼjim'<u5OOD ќB䎠Ɵ|sс$2L K4v Ӈr/.~}ڭ96Yݬ=sY*ؽ ("/y17|2=;SьPZ?}ÈX/|pDq[8'͂$Fr(S: 4TJ R5)bu ."FԮ";8גH6Y &NJ "+ý!R 9+e4v# dPBZ# .-LM׀6JJ+pVW\. p5(4x:\ל%G"n)j-oВU'Wx2pE³7:HZGW$o$ UXۓ"W$eHm W \ጝ"N]O$HiE W"\Y2DH 77?3vtk̹\'MS`#yPec 5}R|)RwWyye7w=G_w?wuN;}ZٍW?xԟ8WiEu1_71fBT'rW)?cƊx\ÁOD^]e GfmӃ6!!z7,K| Te.(O3kxNI Azw#V ``XI@9$RFɘY9b:IKUfgQ|0Ock'dFih:"dQ,`9,:ϵ{&Z`d"_B[miu pqx5_tutBם,Ap{9|JWBE^O6l{f^w~]#UL8T aw]'T~t 0gTv2C θd솝]c.Ùҵ}>^&Ϧ͒Ҫ.0K4T^a`hrqgyōuÇɉERޔ\_.8ٺ.q~>wn\ "uzؾ6Fׯ<“*i6JFgaR ٬k|58>MJwŋk~]T?~rybDeǗ\O~u/.Vڇw;_ /ޒ^n/Vdʖ@-ozU3bU3qyeyG6'z-U|}Ci&V[괓Z]WK:*^4#aE>9w7JA8g3Vf^MaDuRzM?}&~~~G<0!U'Nҡw4^κITZ*5]դwB&$?~,ÿo~|~xZFr-UgD0p:Ls_5nߴeM5MMiZ6 ߦ]dWT:\Bx_Gpd5?7^ꐮy e4ϸL}vߺ}>Zd;HIm$u$e͂:'\KNdŐ"l笝Ȓ ϙ:`[鹝 Sp@~{:QfiPtHK:F'<05)kvO=Q9P}e1X3- {);-θvΑG̽;:]"vЪ6levs ”cxi <_m0 _\?w:]^&󟥊4Ո?wzjh}k/ wNy`ࣱ&ϒ()xkf& dU#%_Iv0Y\21z<]˰kS(R@ӊSM@:oKQx_5W0"2z[B%W\Rn%]vZշ`Zr/lw6@f5G㶂\-Yh1+ۖiӒZ8Ӻ?UNQU_ Q+GVو HVm0 ZDđã7G nxF)}t.T Q{f~Ff:C@稓C Ɣ\JZea۔.} {h1s#w;+?^膝ty-dɿ^R{=SvxE" X0!wB*2kGd&4՛eNзVV9y<MU{yeKoߝ(*ot^y%4cg$We[]bXΒb:IU3pQdsoRlprKl.E38LQs-O)p@idlLWi ӌ}P4'™Dow/ixEQVqz^oxfV,iR[lJ4Y Ifu XRaA8=2B¹5:dՆT= ) lJALELV2cRfMÈݘ8q<nL;Em0j[NQ* 2b@mM.4ϜKp8\i!`J6klֶJ (d!3$S kbbA$L)R9œğX9F21q6amԗ3n?=DlL?EDhEnK$FhcR/sY:.'%eQYOƛ 0Zxöܤ6N $/sˈ r}0te2!jJolJ͈xh=⨚Lcu6%jU-.#i; 2L6Be Ƈ\r[KD\HJf@O`Yӎ}P7z?<|AXqcmȭn:mc?^2k_(zGa`8LCeK0k+Җ kah#?0GqDaHVDr X3N2ZЌpס (1*Et)ĝLM&wd13MR׺]'솪'U7\+ڨWW7UZĠ!2DE+*]ƥWO?P̆~۬+v›+~ۢ湒a8ot-1}F Y7T]_O.{5Tsim/Y^-߽L}:Ȫ=g͛P"\|r2@ω"x6)J%KIpS BMV6+Y#`.\FXn# AI.3s))hL:H$+mX_e6zbd7'xYCbD wIC&$yꮣUS]T ,H ֡ UcE|;=ˆ^Fpd!R&R/5e#ZH0 H8H7 ?#g5.>y$Vǒ.PV',ۙ0f2^S @>$>HxpCFR$j‘4j<5D. HFjӘG"h&U`FJpFmQ1,%! j@CMhg==NpW=.KZSLֳSy[(3FfS{Ɠa4U {u614,Mfpb g8礦=ì4V"8Qnza^_4xx sZ8(3ln%9 l[9m7C=k4 YUҠN(pTӀqFfzNXU$HlIϿki/=Okt$zFK`/N+ڇۮy+a"bas C~|;sZ8hrɴ,ˣsBN0M B4z)h>p"i~r)Wpv(J;Y\zQ7xӟEhp.:pnK\_v|X]7ûi-TS)G]g)1,'.3@݋ugdq#H?d;oC[a\?RqׯލQ|Qs<~#^ -(#8g\2ܥNz儬wǼav<[G?$C[E%fV-JhVH;p2FBpd6PTCfa/+C-V8m0@L αER`LE NJunzdSƠ3y=uݗ=>~s%(G9E]<^oůXR??z}uc7{F.ȈCS*c6z (W!IlB O1jveZ\^jF醒fT>9&MO\S"ƒ; vMY@61"M;DXmXfh.Y)6 s>_$Vq$\@KeV8e*"S BE8{јI&r% Trk*#}v5:&sk BVYti ģ n2R( kutl\ +^TJM@m`P")I٫TdNE6Ν;.F57>7_hn4kR(G[$C2=m*1u4?% j:`,߾-Tl,*w*uw]3FY4/];j<ˀndkft`pP?1Q}`I6+d+_^LRyM>23MGPet3K(*veъg޾f# }=v5T^Z#V< )oĴZbF#3_h涗/,4R8&mf(1h@ 0E.R",bL($HD(w0xDLJdmi&h>: Dg'`x_!=\d֒3<Nv(%"Qd 1 Eqh^#F/IQOw~adVX֮llIQo?A\xhTXyYxR+@gEw0\] )OGgӯA bMڝ{ųLL;,~Ixj)/Y/׼dYhzc"fDz(K7ӫzL'J޲&'+9],f7= Vfγ?f$Ot`e7 ~3z6% La7r~}ǝe{#D%fm#']**<¥[~u)Z4ZX`|${S7(]aԜBZ`6˜_l4$Aeb 3]eB4Ϣvz(iB !t\y2qD*)Fm)!4ִ,7maBV>N^Ef#vykl!?ɮ 2 QA?rSن7H_|"4"PT9˲w$V[KSֿ>kJ$^.v;۫TrR}Ej CѲ([(14%3# EI R#h x$ )pr-ԦVc&ye4zl5@BZ"9kv_j@OO4ʬ K1R))XdfAJ#ȡa:#AH& lTQCR4 jPٻ6r$eqGe 82 =/|ce#IWl=,9zEnٲ Ml6}bxmB/"A` aAm;'IKqX-qc 759\+OMYrA#2) RIڗ3xIs- O)TӚl׌atac3c_](օՅO 5O3)xM: ,jG8@{^w4~[1H8f-AQ+`褰[ˢ{e ,7K:j#`2!EQR!hHa&'uYCp:G,۰n;5v<Wkˆlv`7xjԀI*(1h^&JڐPg2#+-DF6r1XŁ2!CODC&!ZS >$O,>acٮkR? VhǾֈjV#nܖ !VhgRS>+-c>Koai\ &HGXW<5Zƨ@"S\&"=RPFSz Q kƺ]#~@zՁh l͒}jX/V/zqcB5h3nx>R:fe Tv`C̆Y f0sJf[xx4Cݰ>-'Y5r5Yl? 
~7q'Am0IR+V TA\89 us9ssG(͑Qi+@π!x qhuB3һ$.EK~*ܭcnx4.FOu:]fT=]w9~ɍ] %Kk_KvtBݠ [Y}CE+*]?)]MJo?RfͣYWH%+*9^+ޡ慒p0P:fr4;CsZǫA {O]Ŷ9{Y͕/$Z_Y-<&_>/̭O`5\%K*~7W tƘ$U4W`!W.eS1Trm`%"I:IWGDՑrIh60P KDc4|^%4Vs˨7f7.q[f 򐸊.rNgTɻ@BdV{)0;mj;IT/A_h|u=~ ϳ~oop=6L)0 a2X1-^}"8^ ZJI͇VaIy{ &dU&:A 9S!ݞ;,3GLCwfSk^(SDGCсURDH/~Am4ҟZl!Je2Y C2X-=)B9CZHKMP5֝h\|nrғn,iGuV5Fū`-vsJ >y-K5ZtUtD[$F3#*MrhCfrL[ЊMTH"c.l9ZNػ InXI01J C]r"S: e pQOZ"*GѹzB;Za]֝v4dl[Ţ'Β\{M`hIC0 ?v?Ҵ:B OѻNyɌP $3FPnBڅtfi,9!PTԈUV&3$0MB[p&) a wKxI](:1i&xY|k?:q'_s;K+F3 OiO?FkM % {¬4s?w.|?ћԎSJ ̝SI&piB:OҨ|F%pKDdK&NG2Bd4&xu=WM~[Jst%2묀yw-[7O(4ci?N/sk%R_j-tvvg^$CUtĝJQy4^T6?nh(w0j|,=ӴtgRDλƛE`3)5̕^dlH?|3,7_cZrbmKVt&)V68Yޓ&jTrwYlGwmv`k[ls\guDQLFJÊXag;EmL++UwWCn+Z)?t~~b9;;SہNL,X ä:i`x5πUԽ׷^fL۟ _HIw?ߜ;sj= 2XB낅w᷇u>4?޴cM5M͚viZ6ߥ]#io:^ZXl@zd=>7^2T׿ݏejvF~&#s'H>Dgb:7\^[äOS|G?g0 |\k9cfʥ1<8rR4E:VE]\r6.^``"h.1ƧvO=o0|csb{? ?o عcgjvm0n R N" B5^r<.cU(*,U(+$U2@7`l n+hN$8sÎO$f)m!SA*ǁ R z8S4REt GP.&r fB\R%lm^T&o'ֳO#=QQb*&֖iTq%HH23WeJ:g̩YovƽUhjES[_&?OJx5,ǂövX[DP=[z)+V_ < $ [zY)|9BRu^otO|x(iHB19.d!##y˒! .8b16-VֹdlvBN:I0 ]2F e@I>M )J=pEÛug&#=iMܫjfǰWOڞ4W"-գG-_#X`>ؤj!@T&R8/aF$a.*hᝢ OS5y"l2hm%:K(vɨ߶SM&fx^]_6b4`8t6xLb @yLɖŴA G$>+|Ԅp'B9X8Q,J *3Jpä_$ؗ$ aן~xXo{;X/݂4I.[V:#)Z5s;s5zu6u:_tv, e~ZѽU:- ֗l |X:ۇ{wպaRtgRί嬁ًL653a4Qg@/?u&VINY;T9'pbSpgͧLyj#;w}x"'Y蔪@BTY], >eܴ-[nѲeYhArD%CiEh CdhlvC6 d$.g, }8 'zK&#gfˍugdzcEK͗•h^뇭nGWrv*3Y/yL2S/'70F2I)B/LQz*'U;ː73Y8eW9Fǹvs%.䁴=7F$T92߰7֝g9.Y{VZcצ^9$z^žƥB:/RvEFѸం^_]4&Q4ZVVYb~UM0RX=HyDƢ%KlS.hT6HQ'γ!K'*5ɖL}-tLh{yYH̼!}#i l ^[DGpś?F3_\D5pT2__vk:!nUOSkyZϲ>Y9+!Rr]qf' 2M^ I ¹4iC{ڨFs5ܳˆV'TtFʱ.s[ZDU2/Tʆ:0RǍ W[[0"h}4( ڊ1 6VDOYB0@&egmIgw1op7:=,wg;Ʈ2Γ/, T^-C[B5%[`O.-:z"tD_p_cٯߑ* /s%T"0,c cQ`O/9? <=a)9Aغ(erU \,TQǠm`x(0"qi]CZԋ1yL>zݰ9g|,BYoGg{ȑWi"cq2f-sG1Eji -Yղ,c%Qr7Eɪb;6(m=0S54 Z$`a4Aj5Ys6w20-dfs-ӂF,88`C܅Qlt!rf[?Jݢ3/bu-3v`4g-^vI* (@P(T)Jk P Cx—Q3fTi<h<9;i|mF,r\bpLK 4$QRS`qwdҊ][q!RGLw8$(,p'߆DP N;+N[j"](Y{)E]n`{ّgn ݮ_Qx=rHR(p^j8#Ii4xNw;?et,t=m䲤ܶVgof-YRx9ˏ}G?_ ϊߔ1.<CugUϫZ cȪ?4wg9tlѧOEfD*bh1C 'q=5^kk`4Ų"U/ !̵_$͝e%*iUƊ|vo.8ޅQJ%eѴt6³u6ѕm6ȿ]?ؕg?F#APobƾY $r2)K4mA\f莻fٻev,6,zVE50|؋jJ(wTezԘWfSVUm]hhmo7tkRYg"b4Ziן"Bt2Z* I D DZ4a\#26zgmh^nw::"ǏޞDD4VK!"pOpǟ9:({b"IB@2t$1Iaz^JL/~[;kfmY{bӭHx/Btn=V}Wđ'nE?ѭhD}ZFY:1CIU\R: e:{2h>vo#}zn-Js~~#UUDp'R 7(A[,QJR$\scT@6gI6Gɭx!zai` CP9pt:gPWܭ"g߆js)i&Oǡ-<$}wZMEj^:,8MXq%ξs1r- E]QΕC%[e ,Q>pipȅYwϹIQ5Q[%Qt,+*6I%%"} qov_%!6%b eGcC'utwH *yP6 u3X# @j jE$|GbɹdDd&rcW^n5& ZaʠTyG)XJtL"D(T40Hp&D":v2E26 ;)d"ڴl\a ڙ(́;iƐmbq3ktX{G{m/9\v'8/D y_ݣcxL}èqŭ{\arQQGr+ 4RjAF{We-;-{>ჼHEF|LB׍ye4|_ χ̙_azޯQܱ+?l޼Lf5{[\T;xX{ ?nwH{Ν W+qh+-S8oqߑ݉/^Vp.ՉEybz~ĩIH/T㽓J"u:2TZn| ،3}E{c ֱ:]X/o@2;6}[idr/v/CyVWs3fk+eM&(/@mt=kC #iσ=#71I`SsqNwn\ϨD*= v%6S-?3[-Xo9F||J+$3"@1iYp-C@E7DE=StZEwgs4ݙx>NۖiVs/  mQ?%@t1P(l}z6q ֒ aJsap"+dFgK(r, c}1w0Zc ZQҀ"tjw&9$-,uyk\obGgev$;۶> 4x#:NJ#*DBR|)Q{Bd#wԭ mI F0D 5`7&7 dSO1wW 0Gk r| v8ЃʃVSB6-'\w@):3VbUV9G ?bmD {Wag2[>5g/QB9adVS]Y¹\}^n(>| wSDXN5Lɍ8np$cbRNc 0{,.\"Ujڎf-&w;<<_\Th\92~X69VhPwm :qoaBԔ. LTo^U/O'g̓El 1J9'}Ӡ[nv8R Gßć; %Z[zt kkFJ[Yd G>h6y .'SVYꬓm[-EHnX"0Wg t&JU*~?́_;QՑo>i'2x8J6C"4UTT6UZBNbX.B o/_z}~׿:})6qzWq.Um$`~yj?ߴ%]5͚F4̀Ӯ-p]PXAa'AG :O(um^{$dH. 
NS8e4.&~J1 0T)I vGƆ*s [f J!qi' *@V&τc(xAWCR>Mt:9٪Fu>vfjLze]p q}O8z\\BBiKiK^gz)#ͿIr;6-c'BUPXb!MUSFEL1VzQQrVU&#sL˘{\ݜ_`*"Q`CŇ3,d;Bo W(߽зM}=7 G-M}Ya5/4@鶾@Mɍ4n1 ㎣1շ9-(k(WKJj_lwiZVSMq\Wy7 e-K QIIOpN{6ވ`@촬'R_ l/a׎))ѹ(,Z 2MExb61DЎ8JZ8%3xC-@r3zKDy%,w)H9kWB6G~ѫ[?me)HRo1VD9K6~J6^!FwEfg"P Y)cD3.q;'2zoxvF݂ u 10Ω&\r[,ɗw8_V-Gu[|yrR]l,#UMUpr-o*[k;gD^2#|,p}M8d K HK4wSdTFS-.&&NY> (Rd=9dJi*.!HuFtΰ3 YXvXI&ӬD_Vv1*/0_`a/'_9bKkrZSjxJ%˙c+!$Μ M'7wfEF \頄W"2Qg J t؝s7b0.l1;MQw|; veZ(S ,i2BޗW4@)R1f4ؤt{$vIQ3DBk"0P<[/WÐf1vFxXmg 20n "v"wXp86wBHI"JKq#Jڭ5`+ Y(ZqGj&FL̇};#nDp8q&[gg\).ʎqQpq+NY 6AL&+I썊3$GPV&ycWܱ)Pmaw// WkZQ YQ?{Ʊ 7bEz=g5Z(R!%_opHx4HLOOMOWW5_Wդ-Hp0GRc#7C4ml&RYCe Ac2a !99#99j-BBCI8:ӜpQJKA$DQ&GC1zgyе`fu ou{ZȵMF,՛ WKgfĠ!wgEZWԺ~Һ^= py ~hRˊZ{wnxo|xsM-z^h~>G5ήs!Kt㦔ôĎe7zMBԜ=鶉wTmV]?<'מ[7}?*mLJ[]UJ>*mEZd^H)EWҦkﺚQyY/%U§O-UUh0W6f]2ZwSGSģ)#)#)f>D\(EPBIB=X Ub9%s2&mc*8-j[|vFfʮm71 60s!@}:e%Mbÿŷ%bc)xOz<}H s#u,i_S(澽AUUzzL-ν>{ͅP;Ka6Ec;غ*pW)w/p/.RػPn0C?q9\~J܋ kS\Q[ .;:~u~Uүr2±d,pK+}5m c80刷15 T\rD̖{.`.% 6ݖO#A]nEp\0e> K|SGJE7F X]/F-Y< jU0;,G96O$aJY7C| ҥRoL ܷ~㧖;b) Ձ 1;Td:&A&yY9쇫9Շx>>',0WV^zKKd:LܞXN;pK &!CtR+e@Bt%mĘHsf-0%F;N3 ߍVIKnÐyWYWK5öX;w '33R^y8 KWN UW2v!o!ʽ>Fbx>ڧjb K2x8ܹV38ٟrqGyљef)IMV[dzt1{roWyt4"b .Dg Z6::K(v)}C7Zkd h@T]y~qֆG͠RY gW"t):+:/:^Y"cИ ?E  `K' Ar4`ed\邰d; KuK"S"85Ƅ̽vƈ#,SEh dTeϪH44w-bb*jc]8%q*:1v iA=¸dP`|dY$;i?;yU2TgULG+ `Ǥ$ if &;i]q|bq뇭.5B˻BBr6njHd CfRP\ZHI8B,CvH4AgnMjpLfo*YƔTDg%g>1OVӚspT9@%Zg߉ yy}W=xO1vگ<y[nwFΚˢB h\ҸPMF-hЁLp8NeuhVdBlV8sDZWc\g;ǜa{>F*s^Ē:T>Sp$fF^TLD > ` " օ՟xX{/uh}1Jk 󿟝}_d]4.s)r\hc`pFh-P4yFDwVs5ݠcR{M_dO/׳&HUWR-z(l?lfմC=WX% Y*:Vj#&(sݝ_O=fױ#X[~ƽ\MMRF>+R̃7ș jjIs,n1FBxū`u$- Ad F%y4egm?g3JӋѠ,أ4]΅0hO.(֔oA9]=ovo95z`9)oO<Kz( BiMs,Vu! Jjzk(3x!dR1qi5Ee9Zز&Ζa4E㚒 W}!zr߭>) l`9LCNE'2wM!I7_t`Bok$Im|`&N{u FLH˖貌Lӣɍ t\R[QM C,ZZ*!@Ǭ8N*n:qNRə::;"Pk;}" m`P Zh~~~V%SYJoHNfן.ƣS^nSUZqNI?[pNU_!k1rE igt'r{FLlU* i4nwCΪX9O#‡dvjX#=jZDI]5[6nxU>~:o\;W0r ]/~.9DµX@XmJWԳ\}lJ#(M^={CG~r^e>/t/X~3 j(x6jnVCnEb;nm+$UsǶsφI*W,<*R*K5 0 )gGNF͹QQcxFؙq3dV,k%EG/dä4YIZ^0]Fbgƿތ_,?t3˻>i ~8;a'̹]x4(_ٟ{y'VXbq/F}ZN嘼ޖ yWw1-< 頺u5sw;o@SrƮP~78BEN .Z.s}a%M2 '`zCK3j6΍=jd,짼)UL] .{nNz߭IعLytUkZ,E#Z(4;%<~[|%K\%1n!ėbPPk%^+! ܠ n'K,xT\N]m^Z;]$;s@,\йN T:cǚFjQwË9WE6L԰rj\џigC//>LO] g/ƫI/k:rtqAݠFoW=5j` 5 _ I)'n7 i(*:͏EEFnA`7܍O&΢qs.U{ &NeHg#<[G_;f5_ŀn+Lc79e4$( B!5eHM d2A\㮑q;MA{fɆ>Ig`m"Pp"a,ŠjZmsZΨMNyj1N3(n}YsܰN!F7J ebsCtrKA  ̘0C)h !Fwіv#cc􉬅߀}Ƨ˿2Ffj@J@!`"dD@P$%Kkw-mI rP~1vmxHCZVc(),N4N`KӼooX1X kڵM7"_^| A๪߁xRw1ǰxXpwXipnW7FZ;7"uGVObiv{ `UMҞh77Ғhi& @s9TAťR䨔D% hY.kI_t9NGQLH+itŰƩ=0)VZd=ο+U=N(CRJ`z:8B_z٫꼋ބovMۚRݪj MIѠ)O!ĎP~8 j<%bBݜ윤0',5mͩOW ٖSfO}R;91#9 3[CWcJh9tӎ4twzCXsF~h~bEW|]{btNw`Mlݡr>?Ԣej᠘GnHNXx>x͋l2O$<g10oWo߿ZAӗMò0#oǟ!Ig jsGag"|O΂?Ĺ9O'xvaΟ]P)g;j baX4<ɽW +nqtѰ$Ƌ9b0Mf}iNg^d6?vAը9}I99J%o;kHE;N)bxЄAu C9c@hp09sH[ ΉuXFYnWR+R3Ey(2;˜샱F\ȓK5o'p1mZݓK()C=T+ߺyvkx}Q;&tr'.e[vJ@bF$-`MZc7\D[솄CJF: qII*n]\H[**$gHWqRK5nwFaނu5I绵N= &d}k=N%L3yk5/׹)Y`LykL鶘j $nJ:9j#.ڴ)(MUKx[*ePr3+Ή:u TMU+Y[*+m0G5tbJh{tP.=9UTiS*5tUB١ӕugIWJUړ8?ֺP#]iAU-+,%MUq[*}K UJ8Sw=O"ZDP9{ЕC m#Zpun֜uk֒B`ѽ%jͧ7e$vӛ{ITܽ$V>Il#(k}-5ݝ0ƪcVX0W@< Ihl:7d8ye'r/ԝ-}wǻwvs2Q"bT$*hiF羼HW%JZյ)dlCpbqkJ_YeM^r"w~Rtߎ$:YhEQAf8=>@Tvׁ`Y ף17-~6sٯ) uX{ݲ7BNƛ }fJ9{! +/%"eYo3s}L2E}tݺOfr<o &eަsD ҳ@3'5 HHPL58i{T d.n8K;ݮyr(GJ-E9Xb yB8//hPݦF X5c擬ƅist6^Вؙٸc>^Uo %^ )"sgo&;3##1d)4Ճ HJ?8ׄ{Uљ0FU)PzX#VqSDzw*)pnWb x C-U@P&PY.*laK m>>$ $,/cミnj Wم)I|o+Y?_/3 x:,znE7SRY˦Mo3'`(YQ!1L4:>!Z~Px9/ `b%sy0YշTFяDe[ж4Zk:V V@UjևѸ'ԋiW.Vut{Fnu[[58jYkl:* >ߏ'ezVlDǸm&-,ُ/GApd|~1بpduHL+xr\^v"JTJ[*@R8U378%Hë~x~ÿ^Wo>@Nv/aeLUm{0_@=~UkTPߢj.U&[=]M6yK_QjwQ,yd}P)keuLU_[{H}$`}$D"#&$ƻ% ,!D@HGӝ rK5y<0h D0^⮞weZ/sMs@AOR( ,`DBh 3[ !b! 
N\tY)tv@-"6qdb-΀v2u^krSiXTwnJ(ڗNiA0fﶦp@ol4 l쀐/t$'2سcV6H}I`z[Vʚ^凤RE'U8 Ckh<-FE*h:,Ag~R鮕YltN'eKG2mJNS">}0>j6o҃sxvInڞ_৉I={grkʎ!;fjlR5enoJ)*?zw9`zF,6~|v_H=>t}6I9>Mj[cR;ĬW逰 ^ wE%K0Uvɚde,3۳P?umE:k,)<&K|( ^ \ {}6:8_X4 b\7lI-yi5_h[N{Y^uO.'=4p8hvG1`sp+?UA5&b)RnPq4ߖU3֪qb\mhK kdYZOdiQ _j7܍l"H4`e\K3Jq"y0% #\ 9\{f;?n.:\pP. Qcl1ie:H\zab CF""s H"M!D!e!x0k5f,`ZFL&ZOHKDA|&T0.fMC K1R))XdfAJ#ȡa:#AH& lTQCRLh@.(qCvYPI(waaR"Vܭ4O, ܂8P"ɽ"@GUe m_$N29-wj[[߻<. o/Z;0k=@AyZV8JdDi4F#(WQReFd>ZI@H1I \TK%bdF/`Oqf"T9[1.lL2˅a.$>*.Mg2> par3/b'gl mR(F,)Q(1dpci養WBr'ZK(IItY<Sl&Mґ0хT 3vcpfl;y(ݘtܗiìM;,$*Lz#QJ'r:$R{1 aL0!DIQf}ȅ4*x`A (IB`?5 018waԧ=2"kYLj#޺,Jx/+UH&8("!7;FP 8pFq)Ld$&:łKs&Ji 3bcpf˳Cm|\lLJˋa^/vx%҃#,q}(H\I!OaKf X`fg1 .e6xM<%v.-Zd%Yn]C.Y#Ue`|Ț fRd;x*ѱ=T-C=܃ \]*]VsVQN~d,6x@N(*-:#y? 9LIT@@b8YssAɹ9rs'HAKn'G&p {K,9[۾u{›+s#mQBa<ovw}6o3r YRϺ G.=/i89k趁wVWerCꋑ.Vbn7?73:7*%}]}s $Rjj. 921kByd<II'NH2l`&YBFg= 2@r=~s-q6.A_h2:܉p{=X+( vh:z`j0e3 b:z$u)H-d%.շE P5)A3JTցx=4HВvI~҉'eɢ 3m\V10&2i@ 30AQCi `q! f) Le mJ40ϬAq>Ԗ8*ZhdîB͂֬NH$|hipCqX߂eAcALeCzCyLq?Sc[>uyE4Yu)[uƮ8K.㯏RwQ/ޕ(8q>`'PvqR1LQ| h+=%q8_zq6u/^ /T1dBMh|r{4. NN;y-/g!WmT*P[vN/UO|{;* fjv7ugǃ z?7ݏe鹇ef}:E7%X .?tvs  HZ@s- C3@HdI<K}0:&uv:i"):vܐdi#J9/ǒ$V|@RR&Er ͌B.RZa!`]ܐg7FUY Uw&<|ppѸ̥Yrv\ʲ7S#\@8xDƄw+୬QWzRvNIw<]o~~y=mQI3G[],WݎtH{/~|pL>UHPQX9ӧ :t3sgҞ2C:vB͚4vR>+RSoEEm@2,UJ" -4w3`vsmDG-o*Xփe!2P9桐`Ѡ$wmh?_ Tfwҹ Z$9GJ=Ur)^}{|«_&wo5zFG`9)oO<K(4$JbB(q 1Er*nUWAHTI3Fol3^TL\ZpGmYFhYWZge4E W}zV@>)ڜ3oG Nd: `_0Bœ7!lIUHIm|`&N{uY0/ntv.[22M贤YA&_NIBfZ*!˨YqJevB8+% BBPgBc'cZk?ViXlB[eepbPx@c^w/i<ZLXg L#^#w~,1Sƨ$ /ASx4+![8cHgX*GdA` % A'v̫^P,S(2dt6'5FE;ӎ>4<7˓N((w< ԉeQ"!exxC11% Q2 6 8 " Ai}kZ'.C $ԃ@Qo -6xQ* Ezf ɬHYV>_J$&: ʬG&ru &KHxSҡ|F6yx/_Gٟ/{3+x٫O22KxV+U/<@_~{79jEu^WH_J3v9߇D(rOnUX]4t++q☣a0 VR|r}iW,+o;~hJx̖ҵԛ_8J7$\Y|EE:xK,=:.4KNAoҕz,~GlX&U;Z +VskR}t/8t1o^a v.:M̾׃b+w/+\D;3/:=u妸KT]` צQjWLL~a9Ai$Ce*Z-IOߝSpד;&W~z_jh>cx ŊU+';8CԶOd]|Q&w_ YF^j&kR±G;x.0^ҥͣo ];jm4vOP|B C.ˀ7Se_-R]\aY]Ew5մ,{ٯ &\.h|~S[ܺ.gheCե6o,XzUP)o]c-&ab&UF5/U;.,"D*ͮ4?P4@Yn5h"0AC*D 3& 4Tiʅ 2SZݭh>91 x~A-mG}(\OO!5cVKV & @A%P92Ih撥b ##_#uvkcciYnpYrv?".^tN6~v%+`Ntw::?~nmxz?ՒާrtaWZEBX!}yg >f\IڕY=u35aܒUohXnQ* q~B ӢG淙)Ize&Gߟ(ǖ =7邮ʪt9En2Cwg I"6΍v\8gÇ"CF|u@{~<:Cr?8m/\NjIN$=p/rLVv&le5F%ͮS뇚}T9e#CӢ%PHUh'U֧*U+UiեdkliMZ)no/V&)T0nJ"Tr^ҙQӤmԽs:EʍE6 |D! (y"iK?ǩoYRX;IQ1'B&4AO/Ƨ\|8oB4nֽ<'RIIE1HGoVطJ0%(mGZdjڤТ-)$qu$~JaE1U>)Nq`1zҼXt):Q} ]&|J'{XpttcfC (Xeiq=5ڐ]jy;aJ$|./VQ ָhS5>#J@`-~AAN:2]:Xi65hlȴ]`#);:/ͤ-kyGD)J6NB+mebG VkD5{RQR@}JuLN*BU3h_Mr^KYF%EࠄjJ YV+iO3`niXEjE},B\FФq!:38o=apy ;Э^bF\*"͘u44U ULl^vNR'Dh.`V9ÀaWkW F=Fq ڦHl b9PT8xi76BOАf%SdIW=$ B+X(IT}p. O&!z*4}4 ]{ #BK|u)}ю߃b:G4 J./sgmG]I@7St TE`JI%PKcEX A%DEWHPl5C펁5$BjF,z,Z{65^ PФ td!.h]q6 3)I@d(&J+1A2~ȃR!*8vGyPG[a%CIeXYvnjpM"β] ʉ6BkB`%͐NڲhF&8$YI$e(m@VӥGoU4^"ꈖ[BZF7i QH6 W\M-)K ` ڪ6<֣sym^._n?<=;Z+uT$1A[&n[_o44z63 }$v%L ;64k*%8O)N]j31i'rH7l}IeFI*$:@^9uHMlz@'#UwW[OgM:(Jj]d*2(eFAjЁF-H6A( =`-iÖ V׮&;"($'Mm2X4kn3"O:|AWtF-LB)* jf,(*Fb1; ;AN l?Y[Tڠ@8XAVT@9 Z6i =W&݋iq#f0Q%#pzul) ӥ!D 0r( ;fAjҪ\ >$Ct.Xz꤀%a4Q:坩y0B.8! 
R GR/ZMj_>I(;}7}$bagԷo7Qv_E[M-u?G_-YL6fv7lڠ]ZսgY_=t~7pߥzZw(ˋ@ZњwTg*䡎iO8?:nT 8K'5/%`'g1X,O0@W{S.//gߵӣr_(d?;}Ġ;=K@WY\xI!^g^,~:z[O^zw=~)DyW{ͧcM1o|աvkO`)7#@n &E7Z]7X7nPhgDWlD ]\BWVo}C(dAҹY+llZ;".n;]ttt ף1 @ -O(  i1,/޵K(Cz[U4sޞE{8:\v9k-!j9]pur ;ˢA_l Bja%^^ד8_->vceV˽[ h>疴=i DrW46ǿ\4D޿Vq4DDB:C:ﭗ<,,|G NSo[\QC '7!^k&*]]װPМ}uvvcM5mY0{$W7es, pj7H5RnT#JTAF٩8#6]8* #+]+ =@]`7"υmNWn3] ]y楮pϪg*+/v"2] ]E/ψ٬Ddv"*2] ]E-r3+l pg]u,"rWEʍfLWsB렿|18K/ҺU l2 /եL i:=yT,Bˇt<͇:͟>J<3|z1*U.^PNA?y~0-|GtjIYZ"4M {[ `ʊ;u$gpVE"L55kҲƐu#\K k`)Xyj79, bbXrWX,9D7$|%(-,#Zl4BBW#+έ$"B6 *"c+DT Q\ [ ]\AѮ-k1(J tDt% GCWX vBm[+XDt¥t(J <>"B.%=THz3]#]Iifk]!\Nc+D+Z?3(;c(J)b:B<W;t(uGWdG$ T?!`}]5=ZOVBW Qv-dwAMoM ,=RP1z lO)7l[פְZG[һo@nXk TZb"İhKEojeK: L MeDtk\C5ZL Q9FJ+v&hW@G6D)xGWGHWB b+xtEwlz#F*K XjlORbkvr%h!Ǜ5kaiZMDPhM?!DrRrՎ3l9Y VnyR°ҠhΣiȼ`kx kʢ;4,C82mch<jEhm-j9Y4c@ߧFDWؐh CM5CU Qstŭ8"BZGCWטX ZbNW%]IB@Lo!Z+NWR40#1U<B+DiyGWGHWHbr`]!\b+DZB3 ќ vpi4t(pD+ Ѷ?P.oDWlǦg=B)㇟lX:C/kHjҶXb]=qEm傿QꞱ +u_r[j39]6Bx@I)gF7 `fϡlV %J G70*FCWX ъUc+N5""B" •<BxB]]Ijd,Qs0MQ5 `V ~1L{?Ow2Z@5.zY]zsr3{s|x"noyfCw} SzNޜg]ٿ% LzY%NX@?,Fyr{-`7U IɭO|dN3?HZ$8<եRN䨔= 6h.V>IR8萛`9 =R(e HRŐ19$'(|/~ԯ3o/)7O@ja/[JJ6M %JijGIFa6,BBWVްD?FRكtKWR ]!\,bpvBVututcJ3M#2n<6`J@50kDLAXpM4+@kIADIyGWGHW8cZ9cwheAD]VoWXjlRm!0o@W7ѲhQz*%(-kPlkMլ\mx1, Q}v0525riEuò5Y%CǢ#̙`ܤHBj#6NF0dwZ~{5zY`!Q_qdr|?|3XĥAwK,hvAW .q}]u*jf;Iif}΅`Fd! (`PŃTKãFA2`!{XQ DX eZ|:<[`C 7ŽwR(l6욈[LfEVrcqs7x+ AUch矪D.8w״`tagdߜMC~WüQuĤFTh YH=Z/hp []|Zbk*/ġE>^F߮nm5Mi.דr|] b )0Deuus}Tfa2ʃʼ-K_krPwuɠWZY'A+]y032o9-c頨͘ QI%Y}p>@n<\'"=B =;sLQbRkB?5T9Lƹ *LZMr0֋ܒ\lpP(]b&_o0X^U\]yǍggy.cE/,_z:$Ua6(Ǩ 9^x :"jt.FMnpZrv3yWVfH-`Y'v}}N,@bi #$](# 4<.G*Kl>C1d^uw1զ0JmUVi-y6LB*( f %#%-dl(0^nStYwhNm5͵)rFo:eT9 }a=?aFN"8+IvG``9νu(tA@kAqD&eyX׷w#3 Y=*s=Z( 70OT?'g J2x7:%C/iV{ÕPoV Kl^%gnCK z{& ,ۛq BlJDCl1K1o*+#,g=j@Z(j9fpz9y/lk NTfyBB4U&'ڄZn~6qP¸ڜi/#?UrJ)(2_r*4F|4,W* ZuѦg0|{^ԨjDM6ʴ|:O.._;?7.g `"}J0p\gUr{]5H2^ O+wO!rlkNBޝlضlU̦.,a<) jkGxtu|{adkܟ몐rVV Y: ~1 >K'2xp1 ɤH6G,Du+U_Q,o3?W&#܁fk.)1J+ dz$JTJ{@R8W?8NϾ9~?_AٻW?>q+hy0TA0~}ϻgmɾVYs#vɚ %_"o .ڟ-N@<]?(y22׺L[\6/6F66TieW>@]PA+m hs!B&P"eNg拋Gop63M&6 dG/\獁yJg13;Uϼ6a}#IiJ(je4_VqZu2H^AnT$ `!;L'ezIepp~[gӾTVI ɵ;2#}DtETodfvdUaaP,4ЌXW,=ƌodc_r2_ 0LLN/IESd U`ΩdAN9. !M `9Y,ɃrTuFnlG8=7$池vPԆΨ #j vS2rZ̉K4]fĩsZp61ҰF'_-,[Q >钠`M.*YSK➦]/*,Iabs茇َ QHNsix,ح<3"∈#"n\'XrP-2Q)*֡DVUY&Eפ{EBͧ4dFSb ťr 7+1d9ˇ": .·s\g^P\θH#.1i gJn+.GSe{  .j9Zꈋcnxh;}®fe#wuGƊe"q 1@я;2{~4]$TP1:C?9a9sTduL U`1*-X%oDB@X93&菋姥k6f\z5 ~~MOYYl.B#B/ݮz?_{f ixzX|yy-n%ui6^wWxW,v򍖛l:p|{]N!su-Ρ|\&<]v7$ĹWt;_V <>p|_d_(QF7Xapۤ0|I_ξtJ)s: >Wɔ).*3=t:rG'D:HƔ `BR-[{]Kt\Y3H6&m.n`|֮ɷŒc֔|Үx[u4@9( 嘔dTJz]pㅮSFrydGqz3YM+Ζ,xIoi|247fV&bZRHdZiܤo_)yXk'0bi?S1 E34v\kךtVX3M r`a*BԄVkr +5h"dkꆑ* \BЈ}[Д_\VVdU49;5bHlח$$TF7+#݆g_{CS§67束\񷧋PI*^K )jީ"~-[q5%kmZ[CAQԠql % sM~nl{y T.>lk\شqX.}<4*WշA8;Mkw;$[|9 d逢"X3\ JrOJp_ٱo$`Tٷ]o=g^J|o귫ݐ>[[W}i^\uBe;,4{]Vh fo\"E !˄l^4YяGgޝ_J iwDhG s @M¬Bf^_cx1lPt4fn,>{ߟ_ɽ_-90OU^ũ泯3 e*vs4+3wGYkگWTqV RbaU Ē~|ns~z^XPV9ƾ?(#IB9w.,9˗|Z^_x9ȋ C lXݷz{?/F̓߮μ[hՇ:{*4;/Od9i,]Nݺߢ:xuivmϧ0jdWxħG6Vc1p9XIjT<9xay!`~:{[Jh&H ŢD.nT/WvHρgbɻR:eS! D1Z5a+ݢcyY}~^jljt7>.m6Ӳv~uXdy;#;s{z>ʛ{g-^ |q X LV_ScލlBzȍGOMy TErWPmt)']YLJ&*,ɦ[]gf0H`,PPd:w3g{p`Un8pdޟ$9<\U1B||1t wN,]OٜNOXwFF29Yo2B,BSY O9bsyW%0,ƚq7l#tC[-{kR~;&ϗgyI/z#V QcHeD %wu7?n*ѹU?"2T0 2hUZVطV)6xU"W-VV0%t9?#UA# BoQr·ovہ[K#u|P{r{B"uzqd>89(gf1[y[r8͠68+yĹ{s& 3W](眘D&YPЩ]j)/fW%WUwQ&aj#m|a.=bR}xv{^gnVʓby~>Qzd20V- c",z24A!f82\H,s%^4bΘH㪭l+`Js߲{Kz`sv/qf2vX[+ d,Vr[N$W"[˲!7XݻupсSi#= \ϥ&Hk4+5A~ DZ3>վŐ5vS PCY[ pJTIbjn?hst?#rZ߉ٳ,A|ȾjWMVp@7ɦh].9+ѡL 6Y{.b1(S0)Ӹ9F{u?c1? 
m>YQqpFS-XcQLؤ]J pdֺa?J#0tPcd*s,k:S :s+ȀB:9c VR:9p?olkk~ Od]nMbeV;X$g ۾'@6cԤ#Zq*/;+(csrdfcPG<XMNJ6K~N7F7vlR͉s)JٺAЉ0 tJhyƅ/6X㳻ġmF!R[ٝJRoj%0ra<T)qKQ"MJm RN1#HSQH0̘!0 =JPb E`dr$fRG1ȱFAECɎ]Jù_̺R8j$͗g~e3!&{7I$7 ==r޷w}]բ&hv|,͏PSPHI>Mˏ$1l Mx3dw?W,W(\}t1?O?^b{}lgm|Uۻawo {6@+sHԮxݹ.kemK7À-ʮqv8an@ԝQ*c.~/4y 长ry/M+I]?~vh^t'ގcP{HUvz`N c|Eꮮk$(]pl8}m0Tm5oX$??GY7tPp~f&l%Άu`X9%XbrE0g!X!V@v]yEYy%| 9 @91gD)ԑWBzK D@)[.z'y/_9|̓?}֌%B Nh9s@ARG% ;׊#ЧO4ލ~zߒqkcciɘld\ zs3FV{m:үk2Gv  A5$d = ($\$di> YJ 0A8?#slU׈s1W(-{*KTg^҂k\Rr.*K٩,%\Bse$pNd0 ٘,s1W(-Uu+["'l?vrą#'NZ)cR-tjO5HzO.T5*eCmkVaZmAK/ ~Ûޤh~QPF( y˔!(Qz6ʨ{ż%pM0ї֑ǸwHݗ 7w*Fnl?|MkS*ťP|/femJ:TO/TOv@_~ET6plaOhA ) \$ Q!7,.WqCR7B$gdjz. RRޙWh8pFi`TInA`4KkRA"e R gsSQw)*CMzi:1kr1U{i]0%(atg*+NDIؙ!ND$4Bq~3F+Bcض\q*jiQ w*}n答|[l!rƚˀɥk4}ٷex=}6DHeٿ6}<7KlKQM!'$Znĕ Ps9jVdq^_|=CKwW6U\iݦm2 TQD2xdHc.B>Yk48Jh[`oHN;kd4Jc,Ո$Ct*Ř[TmY[i ^^OF ϝ~uq {l%\+:fy$fiÆ6JF_:&y&PCHT.ѮDT6@xWc5FB2oR_Z]} @L4h8HSȩ"a dCzE2R͍N)X ' mՖ̇󉏠S ( '(M fܯVڎa\.$!v1~Anc!x(n'A&(AsEbI(rkͬ I|0Kqƣ>Mw‹Y˵pAbbNߒr 01AxGU:${&9x0^@j}ɥMKHjTA#A*JyۆT G6ia/qR`(ZP,x#4/W8XAuP(\uF #ZQ'GJF#RIRЉ}q RПm־x1ő9~"//4;S﫳[?{]Փ`}}!C Ds~֕== &ל s|` `\i99 %m@!X"Z%ui%u hWvRy׾[Ce06jWV9KU7-w=we3<172W$1Cr A*h/U3s=Sœ2Cԩ:zB͚4v R .I餔AEG2 R T"'p m4vI2$?IoZC@ B5LG,Ѷi:GѠJ:UN:kˠY>RQ(dt?bc{cխ7at$qtƸȒom(C|TI9J6h&9ժ<6F a6V;E!2aE)e]iM eCS5%i&{/됾(kbR"J;H" M,X` ߄Po>+vd=*:4Sy1B'<VdX 6@xRNKʝE.Fc>Q%5j)D$!"YA@"pS1V@ "ӾӱұIyqC~еW X* Mh+ Y C{>)A:= P?_>0ۢikJwvڕߔD9D))_kpOxk}TFݳ]i|}jrЕ~ORU&}ꌥ{..XCQemW/W &\-t=`4}MU'& z}Q0@U h AjLx<)DM*on4T[k*;Zҗl6םq~W yc4ј**<"H1Uآ\8# 1)j9Q=+S(B[hI鎗悋@-EpG))"cDL@7(Cz^@IFD,ƊmtԫQa9}Epj ̧` C0s0f}x|\ j#B[reP, V]r] ?vps f/w$b*= 9jq&LG M0aX 鵌FMFSѲd9[^.y]Et|S5z-mΦ;|Q zL+ϷpE&n:ZmvE!hſEZJD]>2Il+I@sΒ~ 0m|(ONCXԗ$8<\ʾQ[:$>uQ),0 SXeQ,RxZJP;Vș1UNJJs/ERu=L}Z]Ֆ˖mx χu zn-v1$xMqTO~)cꉪf,3|O ٸpU"^eogX6Gpcv3ngqVgݧ^UB]M;_6B"LL>Ͻ&AYx*"s";"#"bD:Oq*J(EZ#D1n6ƞ&/RNq`1w:"bKh #eƽuH ÌМXR)F6 \WZO]Ϫnz'ָTy~mzimhlV*!N~&Z*ZWWߺ^j\Z60ALrhƘ;*֎`Dt"qG G*œi ~-LTOΦ+a5sb̗ {R%H6~%oBM-iR Sx_KWM͐f¸efRa/)AɢUo9jl֝ljo5ou^#apR)0f 6ΆJb"~́OO/oĥ7 ,)-!2_RE*zUUT.URNboP>\otgN.ٿ>gwg߿}0/`:j"AGP N"`|PD‹t/Găt$gŘpxA1Q gH, F(.x ?Fa(1Qt#}mcC .yzGÞ1l"uPI ŝplRH-a`+=c2D4Da4;iLgk;X=|||y}s;KscsyM(᭻ԓ{|CsyRn:פ j0A9hkc2csэm'i=c__(/r!c2H)ixp4H$`yn c=O(] Mx9P l []CK"NJlATǍ*LFbcJ\[P`Zc #hn88)Ld$&:łKs)V JumFfD0pqRgN'[gk\+.qwڠ%#,q}(HcG͔uQ" P H F&-֜F;\| \ܛmqǮx(ZC>^ 4+:g~ʾ<~0W\*d_j}tW^`!b,*U"WC+VcpWR HW\Op\b;z9pA X1u0pP*Q[g*5RqD*pDԇW@B{WJ;fo9){vzB?3\=\F~&SxϔA\>J.g.G\|V+VΫh'Ihq :z^N?$A'x8-z>LN1΁-<^;ZkJo$,ٴ/EMfl(#NWY |2:_u)}U[Ƣc\R?`-1J'Ptڬ3CYʼ&~ԼVbW+]>-[߿yc`h09sH[G“2Wa_c<-nF1,˖>e Y6;ŦBgg*<=`̱|:8V1qrlӴ͙W{ɱ@5Qs-9˝R 3|wJ"-H1=M.ѡhrZ]KT*ir/P#Jc`BU")QD\@P")V_O[ÂW)hśEF"B~䋾΀CA^ q+@A,Xt\HTyLjas"Xfvv(؜%{(\tri9 JÁD$WZ%%_"\qUW`xd5D%W!I9>*KԡU}D\@  & \?{WƑx@6x@+{76 vv" }{H"!5HN ˜awuW_U_Q\JUR~p}* ghrWQZ`3+PF{'1`p7q;վ?\&%;Efb'[JSwEB0 LۅvhV60 I/A_٤惡7܎Pudq%MT-NxɵvEǹ RFz|1c̩ o QZ7D)korxL ~Dpk|W,1 0%h*KձUVC('1UX(yEi❤<OpIJ59"D"vM:?ӍLw/IZ^_r%\kCX3((dQdyuF{WܴC/Ļa4y)H&߼BeM&ޖތ=e#f&?n' Ğ{5YWEIUF{g46=[h٢=Ҳ/nVZ"( F:"4'n'(5 ɪ[Ǿdܺ.]|`U־ݶ7^dXN oR(%=[igCi39:Lłкw\ݓoF抠Zפ;.IKa4%r\KM7Ա^Kũ)I{*UJ90N׆nEwEuSNFYL־<7[#ՠ].P\k80Ǔ068{33VKo\fծ}Җu|zW=Kԏ< c FW_D~?I-\/g~x.\܌Z)'XQ*4 ,gqWcIQJrBeCR\&x#\vW>6ݤՏ f'){\JjokD(9ԓ*IXY' ]-%|Rek;Kծ|vN V k&N6 ccx:etSFwF}LjǷ|%4iĕm;{QcHt {f>/.G[_236[ם/˅1̝/K|ׯ_8CO}1R<&g L^Bz/(H|oϲUI \@pt֭}/ dWF/~ϖ&;-kC( Tই| z E=oR6(?NF6#}>,]An";~~gWymon1eYgO^Bުg/U%I?] 
TX0N?'XCqp{f hh񛙆܇ݑX`EOhr0*sVA[B7ڂ{/hq]ZkYhQ/e^aY z{30GcalNOv0w骲pŗζzLV[V Wtgbr֞kxUkJq\h닋I.郙tݛ8ƹeޮ5m0ROYjl5P,<86ŏ;XI<|,meӑ^qziz5%*~2+g)I ;fPl's  _{>Rk:KΗwvթjӈeQ,eYD/f)(&@?Ӓ.ȠGٷMcA^ڇ0WX/ȑ?@oez -[0> -L-βD*wLS& {e_եEǿϼLFߗm'ҙP-Qv #6ևt!S={XJ'ɖ(]:Ԇpf/)І&H5Sg,3D6Va+WfVY&Q%y$tV[Qʃp-a\^j\x``rmh HIE$TH!*fg)GxdlnWLn.]Ƭl2 NzY?Sj { #)'#o,:Fo%7#p?Y8aY=]{( Y `-1!Jhk9D`.]GJ\=oo!:0B>ɡCP"Z0O;rT)On)2cҫ,o(awqm2`1%XS3ָ1:e uʳ F b']ܭU!v#~1]OXVЋ3XM/{ٹz-8a= t'Bz&3;b bor61 x9x]:XA 2`)"1꼓RL]L:E' tx[  *85TetH^ܸH_AŢт°< 7U}d|,?U66 DIm.(</>)fFayw6{߂*jJ@U&90| Kv2&֦YzL?Sw|YB$̅o^m-9 Gć M5i&tVS7> G F>8eGOWu/͍twFnuS[5ުua#b$-OC<2Fwp6*k +K* JR&~?O>:]a+hrQ-t"T-UOrCSXeqsp$>?{ӷz껷O| U=ۗ=q+ˬI-O@C[UUkVP߲jXi^txzEMP_o(_u(c+&N@L+VӦt4LZ{UϷ_]g;HA1c$ㅇ9EB`PBB$ h>FA;}>ua .y1 @ක/qNs!AoR( ,`DBf3,C LC!lƟtZә||OMxylZ52\10/ip l9;NٰbF㊝5lq F/uM4:*Yikkne]g|F9_R7Yr3J@[IU C!fH .(PF7g)OcPkndڮ&8r˨V\TP/Yv mE(:LrUFClNpmNVe8\u;VP5-&g)8om,rlKI1M5o)*7.al[N>0qDF;(p$SBo IwP m7`F/83Tؚ8ۑ=fb!i  +bfȌE*/^Ml0  F?\#6GAXILCL)U!X'8CASQ 9b6H Jqݠ* Q 13 j-$& f@8r֖D ,n>&b߂)_hKbDA$MPnC`Z7o͑#;"7Gv$EW`FjNXH+ Iz* %#E2OV)uŤq:5ڬtug[ryQw^jm\0 V7:7Cij|MƫI^kƚOn'lyئXlOr?ٖ:^Znm~(\*Ζ|v wl"|).!x|RׯOPK+_J㧧 D'rU{533g1>qBxip1t49IDũ pl%<ЪpۗF5O.1|,ZԡUs=Wd*Ȏ/+tsbrW~[y~Tݿ~mu\I"0SDA M)p}U8 3ͪ o'bZ.ӂ";[z}k`+&h:-/g0ߢzJsyГvrlEXRX1bsʣBO7lh6h8H4ЦA0LJa+",VE,^g& MrM9h& cVք9 Cq2BP\&LPz"+P%Z3+a`[TR2_o` mfښ-G=|="5Ki V3Hc<)KUB <S|CyS\ꩈR gb9u^e狮sY6JP %&154rg2x`n x -z"ΪQ_1b7K+g}|2b jQ+| |gPoS^gnG\ҼK\cI'@4\E;'_N#!^3U8ڄHK &eM.J5#2 k&d:j;g!x1FntEV1ų;/ÂE>>V5m? L@I9`8Q@'QT*DT-ڻ7Qq~ΐۊ1'C<5IX BH7WTKċ`!zb"rս:η^vغy4[y.xyyqu` `\i99i%m0hsHȼ1cBDumFPWfvI,<6oHu:]&M;Rf+GhfvKkv sIիBd@l*ev?m&:,!|K?4y}+^ɎI8]x5=`A ubufF.{߷?isԉiT~K*9H‘R5z ;9'* / x!c"jFm2 .ڊm'GxIM˫?޻? yɥׯ3Τt鋚LR77x|7ܙ,7#G :Ai Kf4#5g/GǡxaYPSh4HPyLSfZ%ka1$U84Vʭd"lWc1cYOPB JQD} i(|"C7|~1Uj/V>XԞ^" [ !>\7^YRa`pQIE.Z&ak eh4~ Hmcw!@ h'*i228$NUٜ;Kq`r616 %\aNكt<oS9w>^#][tt t$l_q ;R)|"hn2SD@0G5!$R!:҂o;> 9ǡe,/?9hk9&Y%H1 )qfIbpJGH%3 6b1tLPvx*[=Ɲ[u88RGՌxoFb^A,hD;0JOuҜjeoCX4YdPJx?Q`wk&#$!1ȮF".V,`_X/]M;nHnQ=ťYXaNpBeP0ZaFk*5>dQ=͠2*f`t}ZSzZ4O*'zSֱ9SBEd Kv4:2VRa`vd^*8;xxCR$\U u-z&r#ߚDJ}cRG$.B42ZGW-[O 6D6PYFx/ WN4ip1t4AIdD-Iʃ66j5<&QIi{b MQH!(%Sl Z-*XV`:7'^GyFN?Dkp_uH+_oȫߍG|Nj/^>?Mods^_rHr8$Ēҗ/yyfN@1GM;=~f!PΛTtfթJo~b<[$!&{ӻQ<Mәy2uMZ3<:x<)J,})/b؀bjP+)^I?,gW(|\s]G=Wٝ| c'آgß /,_zFYr"Y~&E>='S~s)ξinRQRf98vץz0}+QGUWM?UgHo9ΐ̽mY5'ΝLAMAUZ˗ꍍ^^zml l6_\r~4S lpgsXkV8>PdF%7@rԥ[>>!eai/M엃5:W' |U9x}.2=]9  e~oFZnA4Qr&M)=Z{Z^֪$K~M2wY~k|wםUV2PxyN yF4 ('-OPQ@!_fkQ|=Ic{ @`Q=%dq߷xd[cӱ0ѽ|bY<e`YV"f 4 EgY(Uqzc~Mޯڊm5XYew\:zIrYq-?Y?$GhdODa2֔r@ShM9)4ئДr@ShM9)4倦ZShrۦДr@ShM9)4倆H2XShM9)4倦ДWV#oM9)4Дr@M9)4倦ДiM9)4倦Дr@ShM9)<Ǝؑ;rcGnȍyّJqܓ/)W'##&YI"2_]t'y=ro]%k~htWܨtnӍzQO7F=ݨtnӍzQO7F=ݨt;M\GYՀ޽" ]h=="d]&:OD4mhnYc=D7A4v^ !y4)^4Ram<#2&~ o8 ư{kYy?I(B ,#*MI/1d>!.GہFvmSLy9m_t(Fa(O[pedd|Cq0jC0 |3'1F5mTӆ& <w:g.=tLAqjŪ6,(O***DDeXXBIU$cRw^[ṚdTfJQ拭NN|?'?*+/5ܥwߚ]Jq1a=2Fn))&q0[ǹ҈>~A8;8JKٶAdt@LhZ&8㒉h);~2pt39%xG]k5CdU9}d GzUAȌś묀EuAܛQ9 iE4%,pjejȗ/wOOV";,l*;NwQ_k/*f3'i%*m` V4{0,%ߦW;;ݟ}z%zeSjQc`OV˾]Z GßϊM#֕ڒ@\bĺb4|m1Krp4#ż'eof-UK7r](R'ec|6;ʃ6v7>T]ą#+g&SEH\/} e*F=*{:[dT&d$k9>dglB+jVI!gID~H]|'SJvOɉ9<2h g:)e;9rW:}&L`"CރI.t7[Ư'5@_]M*Ua!M(cqu( E8+da HS`ɽ^׽p8:A4E,fg׵u!y/U7';qt4o ZkArji]Q>6DHKj]ޝ4QxG3>uƤցsي hFu-rxUOx2@TIf(!th<]7 zDb e(q"\}IOO tEjչ.{I?`ѫ.?v]Hou` {dЃSՕGB`ƉL5"l%x)G@(Jfcm!ZUggUjq__(*B|S .HN 2^1mm1\LLΟ+؊FjQLilBkN ˽, /K`Rx}ݠSZIQm$ ۔D AC"039ɬ9bfZu6{0s[v5븯זl^!, 5`9lL4F%.*􌄦snDF6uc1XŁ"dBDC&!Zr?r4+j~NCWxjq_="4<4ȹ)17gPҰ̴Q.P&GXWnQ)@1*). )D Qs#XjZB!V|U96uVEU/_$v /GJ l0KM@ &SpN\Yř_ Zq_+C}?.ljT #ϘvF'=! 
e?>Q#("q&oW:soUR wa{;f/_xŔ/?y{+裟x4xP%ӏ^_F4-Ygc|Xu??nm!V˩,4cOi\5LRJ)2z8RE>3^, >eAr}M#tvj{3]Ǵvyf7%z9eQW=Lrv wNq092{bZ5.FgKAA~z}dma`k}KJ`bg=.h;ҕEgo4+o]VآlYٌpJ&3`^C`>dfЌ`PriREK~6[VvͬmJޚ_vjϦ wt%JZ(bѝmV*"T Tr6ѹre]QnRk{bƾ]~m{ɗ7?ya<:3)mexS=\|dVߵ0yޕ'l8][O~>a>rkKO|ؼݔFu|ۿx~x^!HgH#,Υl:&JN L tA:l](HW[gCLBX΋ e 9A#ψGUBc5 ERZq]w/>{TOEWEñ*yRj^;%&}51{񵪳9H@Dzpq=W.VSSaDQ5`*SW̔h4DxU+nP,8T0@t|r+YhF!tY@.Z+B'f+4ϗ^BCbSȧY&Pt^>:⚩A'TGT:&#é,*uӡ~_[1#֎9"s]PR%]]aa={:Mz8ˣ(UL^-ӄ(Kt\Aɂc\+52V_qif4~NZmӿw>jmAzn[Fyw{"GnөGz+/eOO IyGӜ C?S! ˽OA}psB_ eq% EgI,#F!,c3:u!cJ` OEp%cPZy3 ZQTVǣ7o sj|/)?/H-h|IYNC6 +MFq^dÌ6I2\Lr;Co_%?kM`\eCќAxhcbWo-yƁz|i k>*"OkD10u:؁ &N<&dbivtKOfœi&μ$C\s.]1Q&zІA]:fR( &% YOE,B0Gve2ƈժ#V| ~>M1 c:^|v$t?MkJi=:::`!uVq)˂OY"7-?~ȍRE :L$(J, @Øf"Cc3r p̒ېg ~iA_lb:2}f2V;x+4s>*q}c:7*AL?;ݟ޳%mN>~M *CfL YҚb:HD陓 rpIkld|\PٕΛ$m j#E++Dre( @MaxjzK>۶7liM83U"cґhְl4SS!ZD HtE.%ٻFndWfy/O$;j}@4[26V B('dɬZ@!ܝ@IܗFmEK66Yxԡ =XǸG#-[C2X#sd"J"pdT?vktbvmjo }x| Yhv̙ Jc^I21TI ¹Ĵ@n$ynu.ОZu$[M_L@ANdVw5R-B'^vboW2&5_Ǜ^~4r5| ٘`kXErchr5AL3 +y2g%Xde`#r<&TFkI='heH)x"~+ufnZP;c΅0hK. Gv O/W;C,>]s,=ze3\@&ZcM}Jҁc!sR!Dq9@LR[r{D.KR)5B; s1$p<'ǒjXV#gðmd$^d+L"{EN*d 29yA*Vy~26i/: umƱV3y 6Dn"B>;g8Э{hZjƨWF$c1sg`,IU<@q)^3epv%mLL!u62C26Xk ʚ:kD :NQ7 *^>9o]ʕV*U0J+$-1^\^܀^l^mAiKdVّ<ĐJ\ dTY2A84{,ུTH1{e@P\2&ٚt@>64hD,wЫ0Rz~fHF$rd '@˳B{7:z1ـAN0\ *G-c cQ knu3zIN,Ƚb뢔e&A5YZUND S*g@( zSڥvL>zp7q6kww az{kD:q~>Dtrр ,Fo,Y0ـ2'ea'ő@4[䌓Hyn5h7ϑmCO1u  ptxdk!G+ L:ԦJ4*)`߬ɮ(dE78)ݭnk(t}KxfσI*9 #qJYCZF8kNEٓz &*f`t~ZճVK&/ez[K8N4hDdg_?f3OiUJ7_ڷ.\3JL oǤkFW465xY.b֝_ _=qpH}675]e#~ؼn6+@9'zoB ]m0L$N&^_}E}ƾZ*ⶮbz'.K7ܔKO|ܨ[rw#1l}s'y y9M@_A(w(|B~;P7[<ݝ`8~j>*5M\3AV{:6i3X6M=u@3w:`ŃyPB0DlyEU9Jb GVH,b)A20am&+tf l'm`xPEC5hx+PRuo7lpbBԭa~HeVy څ;v| ^:fSP65Z٧)=MQgVqzBdi&PQ 4 !s\TyBv M۝5)4-дVeT# d{} yK hdx-dsm|keGA΃Jd#q n"j!)D2pRy+H㘻tmfMGX #GV@ [\yS:: 'vLF"NPy qry[tncT'86{fFP{7^FFLYX[oKMZi C(OZ2#KO㙉 Ҝ.d AWHbu+H 6?`ɡٓdVIVXB-A2Ok˹fuOhqo )@^}į: ^!M'~8pm(b"p '#):,u~} iK5Nږ<~0JgLzGpK@ 0L )~<Ƴл:<;fְrzd.U1 ijrjNnyi)A?q|O_Ʀq:Cߝpe{3XlT-\VRW nԊxP om_>qJͨroB=M[w&Kh>!NX_x{q~:61i='s{r:[v!8?_%?yG7(֍ڑ^=ٺaĺa |0 8?`-߻MzY|&Q{Muݽ:r|q LF( w;EmL+;גW8WDuR~|m?}ON=%|KOm0q?u'Sq?ovQSTq8_[';;<%! )}/?1 qw|K_h]1VuM?G}7ڱfCK6Z6y d\-*:ZX<_第/:qlM1ϑ{U:ϸ*b|h#o;I!q$:bm"mC_qt>e怖9X樈 ^25*R4>Tx܍l-@:j2G.s<bRwvL;zq[M[4$Mݭtzb߹-zֳo =-pt[Gt;vkoo0W߼v0;;VGw7\>k_\b}(@8թmeNesgu%͟H;C^s:-ξvO9N7tdDYljJIJ0U2GH"HD;TKs_MUJR!(MVYGW*׺Phuڏ}բ EK_&_Tv9d-5Đ`)z@|[-җte,ܨvvZV-Xmÿ7MpHSGT^'0$Wz*i uT %N hSJisG8c0{|ZPz(c|/[8+f$['+ kTՒJ ( D25Z 16\mۛ[3uhE:]]Җ䥣]}..籋.Gq %52TޖL2.k"eE6T#גBRFf$y|ibi)Q<)"wÆcYO~.x1f_^O/CPGg>0loEw> |׿>ZU|v5Q}[故e*'f<9/_)+>8I:ئ]zo?wp __V4ǮJ" ͱavWGe1O_ryVjud,?|_''mO`^sxɗKSAl?떙_^={*˶>y\(~v~ÄN甦fȪ2v4e] q)S?F%%wZ]”O=z -HR:.NN-&qeragy9<| NUS(0%(;nM*GNC5ֱ}1xVٺ\cEyp*t)*1ݚvl J{gCy֘;zz̦k?xZgZُO׎ [:#Q=]{(?ot8?iBO2QvJke#{[rUHЃ?Ư,^A2ɢ*ܣnR ͪZ6l߰4b'!4grr|.Woԥ$%1AlcrxMl*iNZdkZ$|9'gg7|%>nٲ㴪!ųI|csߜ/87oX%F$ 7#>xjL:˭:8DXbKj]!vw3[JCB`)tkQ< b0p4u"jD1)qJSZbGsuu%f#}7$RR֒kJɩHUY؎2ic`j%n -I7j: " m4Gfn!P lھ}of?iW}#I1eO)?) 
)f)&go|1\Z5872JeDXKOH?~JuYȧ4`VUJ-USVԲdR&;OlΈMϑ|"s|>{qiķ=i`twM#t}sp.S✵ n[idjlR˚S>ZmT:\n1Qq VLlFpڵ$)_>ػ\R%N@#FlFUjt a<^amQWǞm|m;7kE,oSx~wbľFv'ݬ%ywҫډ-w؉-ǝ{?xZ-φmE3aK9y|z$;35`j{+@6%:[~Sn*}z^u;{⳷N~kW{%,^15 nXz*ƅM] tz;Us7wl,v ..dq7'#\q6mXƆz7[PT++oò%q?b6͏)wh D_yL_e1֮*v _˿.e]WRE&| T% !V)DgMaONG\]0tqA7k^ח8iVXcu֪X~qL\3˵hjge`kg,niyn-FkwF6B~#mC+6Kwu|w}_j͹\SZ+b0Y;\e jUbe=|i ;3c3kx38fsMٴViʔUrQy.g0aGk832wŤN-[nYCېRg=`5k*)K›Lw(NQa"[a{+4XZn94M]VxFIm=D<@0aǨ+ *=AmOcoY Kdgq-{ (Rt^R}O3qއqs GU֬jjM9iH%9ꂧ tF,`®QG;/I6Y -F|ʶ_fj%$xHȔ`$k]X  m5񤽶Cb9[!U8C2)_`xb\B b&NEDzX:ZfBL/ oUC7.٥$ݐ6*2eQ0ΠVU曌G>r E)[1`Q n<`},T9l_ۺq3n `L˃:|C @b1ıD6b)|NیE )P4 CK釋u69J ^5dD="myEeZ$<0W .mXt,}V2 .G_D @˥`9p1G(ݢ #Jvg7[m;1P~3!㟨X3GdqN\t&k,Ԁl𶛀*76L^_EteݝP$&B|ޤLr# ? 4:tp(ƃg?qo=\_io.O΍ܙtpP [G]F SI6aw{h?3CpYQi/Z\ ▵j4|c Ř G@xᓿ6BQˌclMt^ pӀ;yGCqJt|b7y6;]6CFo߀H~lWʸ}PMqo#q}[Q'ys0GVbp*\mр|?>2M0Q` |πȥ0zbL=z ԜPSD? 豎zCUP&kMJ~Ȗk=Mvޡ,ÄL1TbcD]S%dq?Tތ)]2mF 0 N '֠H82ahӈ"tۚMF?2=jJF{{|t8 J4 $G14 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@Zn'$SEL5N^I@ JW(iH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 $P(D$p Y|;y%M-( :MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&^n>$$pCsot$(s$@)4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@ZN%\Z_Wgx)vy~Ai[w?\>!­\ \:Zp)kho>J\ZBpS;-y\_W񸆸k~Ȃ1{~n! w'~2"Ѕ zoiKQN'٬p6u@K>;tn/llꦻ_޻k_~1@n (o0XrY}O|"9_ גf7^ |͟V7?_{~TwK^'9A=";R7Ru)f)ur7Mw. ?|sS7툔imuSu}:<]8]1~MԺRϰH]g: ) EWLmbʝͺUWѕ/y#HW4rF4LCf͒j *HW ]1n!)mu]1Ku]JQsW&'gq)I&׺2uJJ y2R+ O3*8rtUER3]1n3δ+,x˹15P]U]]ƙ:yU7T誨}s;s+-ޞdV8c|҆Sܻ^渳 k\<^von/k!27=`tց_ůBl*뇯Kҟ$r ~ZίxHecDԯ}Wtfͯ/6OOdˉF,>7GΡ~3P֯ǟ' Yoc GO7l&=]Oo~0e݇7iO;;]gӾɧ\8[{2㺤Cή S?Qur..Ni{çwrLvr P,IW_'gqIߺ2uE+E7)bڹQZuqIAPcUW UU ]#GӖAPum}۾xcq~e-uŔ! 23θ!I&Ӻ2k3D]erT]1p3wbTWLkbJKҗ|)Eл/gq+M7LuՋU0` 8{3Xl~^%֫NeU]=ԣ6 Zƕ k/jQ b;6hy~| Zޠb@ZW Au$n` ݍ6Z70΢]S78HϽ^79)bڙ@QfmuZ:+>cq]1m Du):銁+-btZ2u]1ͪ%{P߾Al B4|~ r>{`?OMxë07ӀOC_o?=4 ƙywS78ތWwz=+/bu4ܮ:o~;/ח74>Nj4!cEg{7HCO[mV??償5{y˃UfKO\E{~XcvaO8_]x;u\h:ݾzw{bZ;֥v=<~w6 ES1.)ӆ[KLFkjoAͽUq%RtŴLjJut`g]1#)bZZSF}θD]eIg$b,f޸uŔ֪5w`,``ic@.:}9z{n| ]U'gU;WδUq%ec4BWVuS_1KYZrALl]qݗ\q&_L1;yٳ}Fyʦ  =rvL #c`trtrL;W2gɹL(t`kL<1RtŴZS6BritEMe+VX)mu]1sGM+NNwW_U&ZSW]-PW!R<颞 ȩג]1-5 )]-QW14F#EWLBnJu]xSuI7)m2?dJgTW gd ]ב+ u]1e'KUI>$]1pbtŸIB-O2Mм]O;7٤z8ɩwЫhq)[{ zΥ n:ٛ*\of~2XGkg'QRcGhg6]X{a9ʔHٗ7Ws)-ߠi]r 8y1u n/xzhݰ+.rt2[hmj]WLj"rNLFά /к2+OTfГ7qIL3ȴf)W]-PWB HW <uJӖ+PuDQSS]&1sW}z+Pzu%ZlO+g/ m ޓKS77̫QuSRֺauKّ+WnRtZͷ9L~R芲=~''92TW}luŔVuD]L@x8]1nRtZ?u6o`?:hq+}OSZbξHjfxn+m_WLYud ~u^T;ӦAyߔj9ʅ|4t]1.]1mp)N/QW볳Pv?'9u/k;rB#&FH!ˆL7Po7{-X͝MVxc@.*]L}jY]U;㝪Nht>+&0ÎYeMS3wmEo7o`bߤbxeYDgLx '9;2,}֔WKu1W†b2y +F6qs~lYyUJ*^'zњnC2f2w}sF=p= W3`pb Fkf_Wº ,jIY182l !EV m@ '5u @GTRI !aX29;$$2(30 Mkv2PtKC97 #R-V8@7R[ER`tE Nlh}2%*-ϧ%Igǣ/`7h Dh>)שzp4_p֡AxZDo|pm&cha|>ۖ3&B) #~x߾>W8to_ȸ(8;=D4<03m` dV0b 2̿+f>̆[Ã@:0t`g{ఘqˇTX[AJ@I1WY*IggkQE,Rd",IQ1T)b+t@Hm7g)Oc.*5XOn=Gzcg?m>0>v(Hyz?;͆i_oƥ #k3Bep0UUHk;wm&LdGzs\;bL eOBGw.ᢽ }"tD9wtT\ٚ?+iHFC`I PZUIez_5"ߓ.̲N˞/m ~igYȷLw#gb^G#E1X#(F(ghℳBT*J$ZF rH# be}0z)#"b1h#2&"3.#3k\6nI鈰qc[v<(A@xgn`fjzZߏ쭾/k'HEsF(@0QH$3 a0`8YOi,BFOd>apG$(D-m K-ɰtpB;ED`lH^ cD>mg x^J #nb΂wDðqg;Ь' 3R- u H]WI6NPb7ݬ`GYZ}(p- 3+p[&@4Z 2D+7څ '%!JxOSdz:M,OP2hM}Si*pC̷G׵ʶ? =YϫvѵCqtƷo=8ZDa[3As_eҔ_s^~GNp_<-/5>-_`ۻ/u6jTַe|f1^v'[mT=AWyIסמx6R+v_W5,|2/W/0#<a  o- qޔ-#M/M6nOYeƸr^LpL*Nw~z)6ʙɻ!/vN!ѹjfb%pCG>I3cOX\򦜇+RS&[],S|bt}w5ܝ:=47<]K[׍57%3 y%R{l1(SކbZLTYH쨨u +~C1kDяStѝڙ\.tZb3DjTx)r! 
acE'pF"B#bRJ%)JK<]EϷ gH|"cOC_N'94Wp1ܛjcVL޽$g)9MTkEO#w9t~$NzD:Oq*J( (a&!p>H%0P98) +7Zl=NBZDl M/[`qoR0#4'(xqG;齱s ],ϯ-[kϦ~{7XW(x?\L0uzA ~"*i1EKJٞ^ס  3ɡ ctZ;Ru䇖t <Ze ) \D ӉD6Va+3M?qiMOU1U1,&{7' ཁ` \GY8r$c$HCp3HVt;"E RrXJ3LǺ/j@tve=(_W`:K0#dX0x '$:؎-0a5#k1ʈ`j@s 8+(~R~!%:0B<ɡCP"KĝA9?b)209󃷒c֝Y%fKɛOZN  &LRl$C5.*ЋrKVJR%.<(. 6,)S´>`gX#2`)"gStV J1ug,Õܜs0Wga^W  N[lBCetH][o9"IPUN,Η}]aR S9Ic. fR cFUDQ$ŠT liNJe}fմYm%LavQmF҃x *Dc}b#]f|̕0d[W}E" eNBC&M:kb|{MW]Րj¸uayRSQgwIxOnڎ.Wu DPgmdW]mxeeVGRb'd6 :̿(_i^ni*_|xe}tMN-#!R |HǦR5$*u+)w4Uq^`$/~շ_/o/x勋>y0.`:bAGP`N|UkWP_[5U쐪in:zEM2W]|eB9O *Y'>^"*ZSRNhMGŸyd(X`$#{ȭd0_wzd\7cWEI%CAHꖺY<N8|z҈ELČYl5֕+lB FB.&dӰdMW֛N7:a}{󌝵%cM׌hՐ킷_}TLDCL|*]U9oRo2ܞ|KU<VёVڥ@<׾GjYH(0$r*[)*~sJ"B&9eT &`obϣ|&o.QT/Gy2:YUBhm~s^5m~LNL0jM t*,'*Ḩ4 s72*Ͱg슅1 ^j%v~QM{*;x/Nx|e|jOZd,qLim"FIO$Jz>D6@Qm5I A F`b239BљsRcnf݈N'i- j7]Q6 =09BffXK2֤,QYQr"|H̟D6x̜p_4Dl""6FDq@[%KRFdD Vav7A9eaGRUcdSEU*΄"a6H bZ5hUss7"~9ԁqqv)Eާf^+.Ƹ\pVۅr'lv%GT8D m`PpNC2>p7[yǮxh a 2[p[ܐLُ2\m t/,GHQc:1[dC2ݬl]X:uA4dC='una9,A5IyQ”O"5HNH9PҰӤNz dR1i;+6k}]}WWYwe7csQ=-^a=x:MjPf7 zo~Og{QQG?N=`- ɯ5bg'|Uv^Y *vS]?yX^>v5OyذD0e??,: \6;ly(y{]${teqf\|M_΍[d1B4eRC@H,N! }L@6>].k{аwI Be9+dQ)pR59W4ĶrgbJhknLU0I9钵E h|AaNT(}T5scOe?8vswsŧ'8_򼹒?=y9u]?aZ h&[gXY[lZ;*VXc+\LK:oλ 6_Beڷ|ȖS͝2,vCv~;L5r  ߋ&(3QLݶ񈇘eе*O(>J@>;;L:uD@Vs+()# 069&`e aDQdKY%RD3G=*@2]1. H ):@)gG=1P& ט~63wcw6=텐OߟM?/mo[]'{b✵ ȧd%t @ڬsp/FATZ9t-cޑܯٿ-Q=8:KNJ㜯YB] YFE%{iR:ܩJ773|#zxzzz^U[;5D9XlW 3Eڗhep@8=gʹBӟw!G]MzM@hz$Ik |PAͮ4FqhOa?qizV#ѷF5n\ҖxqhlE_æ154[qrpc,&T ^I)z4s4B4)8R ?Iܝ{A[6B;0  >lɪ{$(ur0&PF.DP9PCUy.h^y>,dU|jK[r AȈMPҤ*-V;eA̠URvSDL~pebmeC4ϥƼmf|ynū J6t݋DJ{Z$jC`VΑf&1QODQJ砩zM;QFW+PdDM腐9dyWE*R Q": .e̜vy wo8t^uN=y `Cd[wN=Ól12jJnG"9t}tC놤t<0Oi]vB[ dB6HE*K_t*Twԍ6E>`@п: qU`d'⯫ϾeBZP'n&ibN qo/#>W[rZ0b|z6n2U R.)S臌HD} ͩIq1\0~;.Yba" :`2q`DB+!GZ"{Wz+ϝҶ-96EB9)[rBj8`%FBIpL?e/iyxd( |y:!|QJ ,4UbY 'D&!(RfmzՖWk|1D{ԙqG)5ŶGsOXX[͘epe]'vbp*vƊCZ}VҺPTZ\ U5[+Rk•U5 \ZUR 5•(= bj.Cj=\U+kpjJ9 k<:JX)N\`ચfZU`WWN8(b>j1WZR}+R!|9pElzz' Fa>R p0kQ?\=5\ڹE8?v1oOR8ʱ|P]bZ$y6twus;_U×^e[Q $w (AgH?uu]?Mk.vs$MN4gj)k"Vbq|߶\]-~ pۤ6 "K{̮xR7C/Qun٣gP_0\Yywx,[a?a7uKP)hG>Be1,.[\ PJS$eRAЀY02] yi|+/Zv4I1Bz}׬3Ms:U+mxKsUB8YLrvy *Y-Fjʤi֚kS ֩J ޵u$_!tLތNjŌwnI m|(Qn(x%uT,VqD<7{l14MGᏮu9n0X~9Φ}my\$Y~'Z} e\ qI5F6%eK[rZNePJ*qu~_0!}f>%9δ$%P(X䓍/Zoͧ`:d083M,sdn)YrDX^ZcC q̗2,Z}j'vÌ`)zYS(nV(K \Q[ !ɞAEEg9h-iq mد8E AXXcė% cȫm^ȭi|DJE@6 o͎񬽶Fe?R`t&bec6>k5mԡ]Cq޽  Bŵ,>/ vb[MpuTXe &4 .'086 CS1&Ga`8`O/J+{b=6CAJUy82Fﭷ1%.qG"g#q0!wLB0-e)pPgR D,"l2mw &' m3Sk!0Q GA@*}r*Yϡ2$]+Q ֥̐MbXtLG˓"]DM`|EڃkHԆ&5T^אeh؇T]#.s,zϐ ra _u5}MANTm/]/d!T?3yKnd*ہnЗ$ a)t~w u SSm6$^'^2,Lr(cP.-i5Xϐ9hPn9B/\pI} $$\LGS{Xsv*Ae.mЮcD;v@BBbC F{6hKՌTP3VOY a:Z!Y5"č{!t% ^5Bi-Hty:Z'r:up;{X-*Rvb>VqV%ضhjM{ZY!]td1C4M11xgA Zm'ô$޺ཚE{BZh; HG}>;'oJ`Fn Q4Xẽ+Ϸ7ݮ^lklo^3)@݃ Ȍ!n$ٔhfFf u`}˿Bg*bq409ZRnXY z31K @9U0eз]TfY\VV$%xKTKP./h722tx_h&%\+RA\@ l+Ҁ *M =>b*DYo٪'0 F`V"mÖ}pp#m;JcZ 0zJ;I[cd@裤-LH _8 U HS]HǀLA{7dn'16,Z %ߒlKp t PEc1]Z([^JO%Fz[!@քN@Pp^4z7%à;Z *[̀z1 =peqr_lVhJ7a#ؽ=>dNK'qL)Q\4fK/ypUa\@D!S ձzIrrrdH*8ėԅ/]0pd Jk!]~A wW<"])2昢1Tq|K6Bb^ŭvCC8Eto\a `۰ci{Qm9'?DFej[}ө!f35gjݛ-';K~%yb:U!srpLqnq{N A@G@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': Nu!"  
p-  ;ة脜@֫H@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': tN fT59dMܼ'5J tN "gH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R':N yMN$qmp5N;uHvfuH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R':N iٳ.d\v{\vŻG@ !zKY%Vc\yt%j\: ҿL}46rX]0֮;OJW'HWH}"1.ЕMJPftut Q\] uѸЕtt%(]P:A&9WDW8;jԕ=fPP]y*?pscM0ai |lKJW=9ow/^ɼ{r|s)w/9-]^ Poп_/.޽hWܼ~ Ɔ"yͫ1E;sWˤam)r컝ۛg gAcb9dԿw~׳޿#J^!~n_\֋f\bOq36s`0[?0paiO;yzcobwϋ&47o;79t:\'YLպnMo2Sش\ZY~*6 gu1a?w7.e+QԇRciɛ`oF\{xk.PF5nGY٢y< ~M4#~3lLTy';?~7~$>+>t@B'hA_"~-/"ʔt%Z՚fɭ~<]eN[R8+WmNWT]"]+1f5t%pj( ڧsʝ-JWCW_VDW7Z2GFP&S^] н.EW7&{kW){kyMkWu5t%p]^ ] `( U>n-EW7fhلc+Atӡ "i=K[QW@#TU>7!S/{n$tu(`럔9i%tu8cjCJW5ygӗqwBֈz{]mzu]lo?dfcy,s/܄DvSMSgD࢜Gi3kqyQ?n>躭UhEAG p٭E7l]7e4Vu ͊Jnk+A$W=Ztʇ2ػ_u֣JFwt%(+] ]lXS3(z4c+A  WDWLf=J:eHJW'HW?0O%0;6 ӡ[89Z ] \BW6c+AI~tgZ{_ 6m}$Y &IC v#9I0}Zl,Q#VuxX$uN+c|*"*y"tut)99X9 ]Z٨+BltE(@WHW&#BCW\CR@W ]1K'NW[w"C7} R&w+>M+ɜ'w\iҔ@+M>Oj yR^Eg8i ۖt(gmFg"@pE7ZS ҪA7nN*2+lE6tEpʅ:]J:AZN []\)s+B뒏 Jr:c s++y.tEh2=y: ]iӸ ϓl.tEhE+B)@W'HWcGWXg \MuA7fpO,J; ]!\t.tEhytE( "]9f ":5Į5tЕرu {n' vC{Jd2Ż]nzgt?SlkT0ڼ ŃgĎ´6r9K#[:RmhNЊ -8W03v܅g`*غCA%2uxr׹\<9Dk!yOP>xrG8h96*"BWVKej[tu`e"89U.u"tuX :"ZBW3:]J:AR\X2+ uEpy6оϧ+B,S+͵&#"6".)Dk!y":I2I+3+T>3Wg33h_l9 ҕ\ɜbWVn{ S`C eʈpF 'CCWrǦgLX3 }npMwCz]uB)Yb+ف@W{7= ;EҤ)j-Vx@HsyI:`F_p욥Vm]^ҡ2 8#@pE7 Z&u@(S[^2膣.uBgCWM.kn(JH͜Έ0JlJ ]ZBuut%;sk ]|ȈN%)ҕR]!` ,u" :AJ3'Np9˅P)ҕQ8Pٌ|BWgj'NWR39 ҕՠXN Z=z}]Zԩ-/(tS s U ZkS+D)곡+cӫ3 d}ǮcWJX"@WjY:$㉺B.cW4v[ jԲG4l>ju%Zْvz/V{s<)l L@kjrIކ_;J)ƙپC5']Iuk6Oт{r3;xr'quIuD6tEpȅtb+a]hW\ZH@W'IW+TFtE-ˆnߛ\vBK%Ne,:JfCWWg|PZ3 ҕj]!`q .\J9)ҕ DNtEΆ mr+B z+k2#Bq .&NhevB)ҕsZ]!?tEpU6Z:]JlJ mt swh7NhE_ wC 9]遮oz+[dΓ;:iJܰ.vne)w~A-bR[t(3+Y6rn m0膣*#BRB6tEpȅL-hЕdB:"BW֩ QtuR&lS AMA:}ov>mu[%->z޻'z/?~/{ճXaE1z걨Bo zqKFgr:jr{:kO=b4)蓂: y ɵ|qA$N esяrB +N\ͩ\>ҲN*Uf Ό?>KXE,x;>HgoѲ%[c;-M񻅡bt iT#vw(FpEZX#FbŚfNZ]!` \"u"jS+C]&nlBtuteA(""=."R+BiS+J]!`e6tEp]ZeR+Bi`φ̎MoX_\ }npU;tCWW'$t+3M) xn%J>w8є>4|.lA3D;a/l3:.0gl[?إNN^vQا{M-n[BzW~ׯSEUh;~}'|ҿ u|>wS+}]$W/~qu-vx˧_vWİ.~|ã!v=4%X r7NHFK {!Úع_-6 ZFo旣qCKsc)56^LK_ȼzbTБ=Zae,pqamWoֿ#Nq\Xxw>n;y 4?f+/EZ HUX %! u i%ފ60]1}ܪ< o^Y]$ .t[5F,?-NsW57>~ؗƿ~`[S~Hn2M>/Wiβ/JV ~7^wήۗW3_^0w~zbp=;sKB*L%ClY =:ȸD%WV8Käuҡtx3dޝA D7Wgp}z5y;g7mWO,VSP]XzX&4%&m,/'5R@ l)zGLO{||~38?~*Hqwr&'у׳~vy.FsF5 ~ Kt])x>jQSJ+Jɨιc4loP,#/P"V-(+iKV ayyPvha=hXB=P(`/,ۃdj3_X]k90<8}j뒃1XPFh]*2F9nHX1]VBY/ ! VDZp,BEƕ3` k=1رb az7\5đr.'yexGk$$WU bmKrG1b=xz}1ʃL)(/ะUtuVJK%QJRY <(r{d.E`NXZm P m6X[ʄr08] iBcIXtU`K*YեFX9eJ o9+!J$-(V0K ms)e>\#CWS P){a,7VQKg#Z3x7 xfC5ȿ? M4Эsڐh;^益\cϝ]>-铙77qY/wed\Y_67|u =#4{x<0gvɆ6"wWz,Ή0frl'#|TK!u'B@W*&`sGFr* ghVJ0(a|i)TڀlWaӇel%.WeCe]OexoClhmmMhƿ'o[aA=g=1a=>ʴUjECmQѵGAR$bj@T,#4硩v|AQKPUhmFM%wE mAZ+sֈ: ::kk  Û1i;/7N1@mI׋(eU# zzJzkrn!j%ы7[t* 9 O<}^Զgl?؄UiK 9 XYG \-X8XqnQ[[sU8 TN AXVeK`ЄV`J*k/W3 r&퓙_'wE^(Sn+&w&"EzcP]Vu)5gAYSs SGq׆m`*.gxs]T4n߾ b CO::k*v-AMOVkȿ y vTxs! 
4 ]4eY(Tż( AjZ;x+ 6GЕ IU3Eb^V"`Kmw=9z9OoWwK)loS W/TuM2i5Z)\PZWL"FJzYYѮ3SsgnQ9PuiKtt(s࣐ޢJ.#:у1J FUW؂i ި>;N&zյlkqOxgٓgipŒV.͜LhVk8-X] FyY9eE!=+җ`5,QW sRRSRUEeѺ*jSӄ@mPhU\֜XEia"Y(ZG ^̰g%FTd"UAQH1 陮驪uЄHNMc\#Ñ!{p|b؇92%yX}& j(Kյ׼{/ztttuYhdc>g60+}r!z%ļR%2ӆ |au N'ОYu$Ʇ|Gݽ-6![*_t~1sV/,M[-<`R8B)jswJd3R2ӼL=yIHN3 q[^2ͯ[qՕ J*dc1qdc1əP2$H^xɴ`%Xd`#r<&TFkI='heHUmG[lRhЏ]n,Mp.cMo"BJ1å4Cg>8r.z{u $)KJ$bW7*捊 29yA*Vy~7H  ҁB9jf"2ms P6!"81+,dJv|hzm>/05V  (z X/>=Fryk{4^@{fArXϛcQ[kPZQ2dVّ<ĐJ\ dTY2TG+l,?|m+kq԰Qlk$s=Mgн':z&HHǖ%ob"Pd"L.87bgk.(yլ|&o^HGɇLCѧC?h|0~tDzqr:Ƈk ~V>.rҭ)EMa(+T#)Z9 ;kc^c[z=3@]=Ҕמ#NBgM/c6ߚA^ y :>ASAfn\{Տ͡7uuOuy5?[5%x8iuPAHҦ'Jr^2a?;iw"nG%3q53 xF;T)r:ԂƲN:V1֓ cn_hju}%] pe+Kx뢔e&A5YZUND S*2tB K\<ƞzRڦ<(~\>p3Fٍ;owvr=x+XX`AeNOM98p^RMtq4BwS Ϡr: avP8 :zPR$D搱@ޑkZtmf'?TSAW3koonƈ5$Db26qI$GDRD)32C餾5B\ }-M52{[ áCQԶg e@dKRJ[Љdp5.&* 2 J p\5$5%ORic`Dt5^X?st ?ihx}(\+jhslzVrTґ)2Rz97+d5"X}]B*VX}Ul9'L kiR;.Q_>4";ٰk/^zY b$F}5zXM5lg$^?F Fm(ۢ;GjZ %\*)ٽ_5i-*i0mOEָZS|" _b +W.@=:p:H5ͧT}Sa5/wCrLiaM7s8L U{]n_ &aTg~2'dЇt tUZx' |}eD|q8kVr~HO9hz&tnN}zl zqקSu0o"o6Bu ;)5Vz/E\R/)g~M'i֦wJzq4mA6{ 㽁ڻSd=/y'3 ALR{54kp 76v;c& I}~ Y-;bV6]tѝ/(GiTL;jiE,T\Z&Mx%Lm0ЗllH}~Z|۔PĻ{U"mh2[x&IJRLT&'e,ʨ=Bs^+Jp` NOOֺw  QW0aPR Epu .y{CeIDy϶[@EwW s/JEћgʋo4_4m2< 2^\ mڥD}5)M:wh :1]P*c Zإ<5W$1 9Wrھ"ؑ'QP,W^e딎B4 H(5 *Mj19]V{,fo,qO|Ȉ|C~0 0 }]fmˌ4ac)1z):orjғxfb¨4*YBX RtP IMXhu`2$Q+Fim%SyzDZA6ַ 2*4*|ɉx6E?(=ܥ. %\L',d$E?@zGX8W@SW?H+h[Qd'Qb> OpK@ 0L윰0Rrwdap33L vs\guxEnǹ5Ϸq,.Q;5koWPDmXp0}ݟ4L+#u~ S̰M[T*U7+]ڱ]Knӵlg~~Mo/^u7=pOlx! 33 :驃 sҫ?prQJg70VGQ!h HJaPL%f|tZY*Qk<Άk,B7\>`g{WAZj -cۂVm2^(wu,gninc䣙sv44-XYmLFC9\D^m:+ ;[[-ajyH'xETWfP(!LVyxyf nNAE<9S̲ ]{ˍ;['s^0RU|  W:hB"OCI˲7YufoLSe4 ؔ%e+(jH<&p#ɞgRRmk٬۳UZӅq}uhYN>zEihN6c&8L~Ӡ٠?~[34qA9)f#4ܘhb褰[ˢ{:ecu:F,ΞOJaR@4*ȘJd39ɬB9T䈙E-k٬hV\mڭq}lYkNkwMOs"D06:,~(BJc.6BPC G gm`5W!eXckh I<]/̇D+[Èf߲>lvibW#54bo\ !VgҐ}+-c>KgTJK`.&HGX׼]찳Q/|cJ"S\&P"=RVgZ{ȱ_i|?d1LL~F##y%}.J/GHUvBRUnU^#bQ{kp:_:w9=^8Zg^_~ōEKTގ"0f&EϩHk7!+NRaI;؛zǾPr?xE5Uhȝz> ¶iMIp5DُÎHI)q8r$ 39!hnjwHzH{H{戹H0sa1J ~* %",2 X2ͱOI1vеi3cnt7 t9>7ri y Kdb0|j5L7iMըntkWt[bW-/.!9mya<mhozvp`w;%, Ϯ- ?;#.89饻YbyZ}ut˟eo['p*s.sۍEOH:X{+bf >Vq dH8 6֘q91⿻ǑT-|&n@0_TeBn~݃5}sK׾ፂ7]n`nF&Yѳ)nl^έ^iK)ʵ*Fpusj^yxɖFgl)V~6?ͦAӈnz_Pi}18ב  Ċn*AdpzƜ"DG#q`c--ccK}Ѭ}gH `5d8nQ:cԉP SKT."xDxUSLUמUC>pcc*fF#ı H bِX$JT[=G0&퉒 t.g*g82u<*%&O^{iEa,Ա %g:U\$#څh 0N>j$F%Hk6al1svf,O, +/zn/,nq&%: edāE"AVC$$?V=x"*xCi>g\N'^p4[C~7v}i _:rS@µ.L4K`sKS"<*% PP`sfdD?^Ixwy:>ځ0s&@[)̩(&7b8kP 1wAl"&(Sw 3k8G@:T\USlW[}R8tףӇ8ly4r0X6z.zEB}X[xh=f ʔx\1j+S;S &,lU@7v&( ͣ~@Kn!D Ē@.(Z(JG/L܋Pc}K|P^LjB&Js] J$a6e8L-/J‚ g|ʖs7F^Ffԍ<~x9am,%AҚ1"s"al0h9cu:x]DMV%P q:Vu+,/,,$ +ذ^Lɨ~ _ك{"w8XG/dM9ѷG:͟CMx*4QC)9!C +z>#= zMf#4&U)zf$%n, *xe &z#Rƙf@H 1׋RRl~G~=q޶h~#q;jt5#Zُw8F4ؠDJ!8eЂIJEh5= ~!f0 CN%1Y#T\ഠ+ꂓ= (bHD x<芼o -pYUӳ]Kb}7MGDxlNpl'v+^јBRYT'(Df 1%A3f4O73=3 ϗ&γ9RG(ɥ#.CxYYbYt(Nqg 8޾0rJ\jH(AF oC!4C3!ѲGJ_+]ӢEgjta8dchP[E]{v#Y@l !;M]qø]E 1M6+iNe )zB \ =v2Owk>VQ8 kc'O|m(uo /Sލ]NĻyXݜr.fWxw?yxT''ӫ(,?]uo~9kݟ.'ׅ x,%Ku[FBHr;S{ĚO0(; '&avZ#=e E[e]t*v-?-eYiښMD+ty]o{=[rX8@ _淮áW~&NW f;:I?E= p'󦎋тS~S7E)I1;E>וA>w6Z?gKb1ĸj6n\o W^' 62QMP*XޞBF\JbӬLDF1`K-uk36;)kgWPy[RLyF,&b5O!}s말iA];_)h%_}fԴ)=ԃҍ{J]#Q㮖w{fjCwJjA}E QшjB2[iyzߖ)oc^41UEvӚ-u[ϝnbV6]~ ;,ޥ̢ՠA9 1 Rn'bR 2('lz*_Gk^?[Dފ~ǨD3j"$EtE8g$ AB%D#I 鬬oܸN ޖf9r; {UG\vM:~M2ԊkmE GS6+ı*YJMJ~DphJ\X*Kz_;KW\e?z,ppRJl>++* Ux*KB)%1\}1pex ZpF30\m'JvR*/2[j[o ]qgB xNpVin_0Ҙ^\q ,2C蚔j+|4rBRjukd6o?5)T<i:_vA7oJkC*QiDݱʗt\wg Mg]s #+(<W߿)(v/CK"R}B@3wlɌ:&DِeGc6diِr0^=xB,.gWYZWYJzpť\劷 \eq?Ҿh;)•`^ӳ|Y]ACK @wz{?i"{RYCi Z^Q)w{k`A@N篞u]ꁝP_yNiru/?ߜ6JVjJW_= q.{˜ R0[IpBHW ;-q<9+erU*W<hD opėyV^(0jP͡-U[GBa8W ptP[)MY öT&1bN.%eKgvH;6 HJ SkPļ?JoLMQOnQA›pC_.l#^ 
sU&Py9vXjaA+DP&G%:01T2I.FCL'7BE+-r޷J;؞*$*rX%C:5I:m,R[4QnQh/*É+1bDt3:c=29=p|R`w {)IƵ1S)1Z `{f-DCRϭu ƪ2%Tu*nWU[|WjHGD |}9.\ϕ"VqQ9W(uJA J{}P>7ȯ]u/4vr55JI8R^)E=FY LKjj#'p5sQAթH $ū`u(^X41 F;Z(Hюl.yZ>vjvw:WA$#t7^_ؒUzqxVy}*Oۮёi[#K6cw9n%!D r9D]ylݖ'ʇLݑZS#e 3xeTEXW:2M>i~嚒 S>Hzt-9wLY+Xb w n'_ŭf.-fDHj 7L+aF'e\@5^ܨ@rg'FCچըO::AzRhH"D(4td W(3hԙ0Xt*6X~/kxCpVil͠a;~Gh"PjqDdiu{+lWdAKvk'Θ nYh\U|&s^6 QqJ20*EXt&γ9RG(ɥ#.CxYY U0;Nqg /oo(jRΰbt:p48hQ`#g{PHt|L{ z״h٧]$VFQמoBȎ.@<+paxC}0DWB0#|kmD$N23QR{')EOؐhXƫ-p)95`_&:8 =^F5UͿ*H/D B >O|m(u /Sލ]42:BAq,~nNlpU+<irn2i$V;˽= ĦY 11xSW*]el2ŝm ڝ:q/=R)7r. Q͙XLCkC66O!9~ʃry.BJz}~A*}~gT4iS^#.V{tƻF]-&Ն>ߕ.&cM=&ңD Pj )oc^41UEvӚ-u[ϝnbV6]~ ;ޥ̢ՠA9 1gLRn'bR 2('0lz*_Gk^?Mފ~ǨD3j"$Et58g$ AB%D#I 鬬oܸN ޖfocU/h6`~6fW2&|"Pʠ+e 0`Y4 YeH-{H-zH-'Y,&"g{8W!p;R[s6AgЏjkTHɶ|~3$EJ|Jt2AlKtuOUCLh3HB+!rL]xF>r5'e,Vw x-} !q]I_t& dID)DfW8-&0`o9zw`\]O|D@ѾPsA?5k\v-j=h]0MEyg«DVrik[},)oO+@2*JEћgd=:HGe0kA!Dtj)$o4y r +3MiӮӔxL,<ҦE8@$87ːJ^š2Y0Sq<ݮH"cnS\Ö.K@XudLeYJ E= 22/Ǧ !]`S뷱\X FNп l[͢'#%22bߤĤNN hSbSt jғxfb¨4*YBdX Ra5P AF:0Uh6*@<#sfazkkGq20;3JSZtJWx?> soz7qilN%C8dK&N;0RrOxWd|C /]56^W1 U :+ԌtG֏煃ppS-qv\?N]gYck{9mٽGxQ%څ֥I.rGSeA%6}₶è27moaN{ŵF.+ \L>x8s?Y[v9 ?%>xo8x[5+GRzHW13,TaMYŇ˻B_oVO0V*Q\5ꪹ:2lqTFQ,.h+j㔟ްq/}LՏmO ҷqpoS^4q,aT4$W5U|L4S˽ؿ !)^}o/Ͽ9 q_y rU$X7 ?G݃GCX[CxӡU -ۜd״9JWV!V->xAK틵""=Yb'ٰ&}gK?g`>g.tu6S;ʥ1:<8>iɊth'0VGQ!h HJaPL%f|tZ99xbųƶ'gaU,a;3`-MS߿xVJWh#GHJJ*坩O*YH>P }ڙO^hUMKi]Z9UіyEQwU:tY×W+W(JM8}e,RS,[6-r7Uy7 ?V B4w]v_VIQi^e;wB^z./G-یZ6Hh>l;LH[lH.6T>5k4d4TȁUJJ^lEDVm: G n(EzVkC>)(ҠhL&0Ȍd39 L׀2,w)(.CW ,%˜; ´k-kU1ejz]|rR3"' m) FeŽW4zÔc`Yk>" aiQ"IȢ(l$X^+aDoYFf}XӲE1Fl?vՈe:iĵ$ }L`egJi ,32㚷;,g4" EτUhphc,j*U!;95 WH/ auzQwzӋk@"ngx>R:2F%tvʆYboB )}R ie!Ѵh;vՇe}hvӇPa5=r^G w1h(-g~N ~GaʶLqbe=J1PU0 d,PمЫ:bfOd&x /[Or?A(3┳NJT !BWWt8°7MXY~aݪ壩53#p}i ѕ,*kGgqiSJ}1k67r}oI}sw6"/wW-߮3|}?v>z͛^%l;h8\m'|o oxpu?הt[-=O${m?,yڞ&>XEro?f eƒe't:u~}̝Pe|T{Ϭ}g*jT+q\rBFFb.K(Q-جֹel7xthxLm:R >Tӈ-մL9?飛xY?a,nMB82(D :(+!Hs1KNx[ z=.YFԆԊ[6Z2hɈXDi.QZo;e7i0>_H%~Y|~߆cxPYe2#!T%쬘.H&#*$" k|4p'pemK?d%ŢJg\/̫(#_՚ )ґ`YȖTd (X;km"?+m91"ijACS$;Z,5qUΔyQd:ω|t ڇW>^31fON|>9$HF0֕*UTY]- >et?;c6r0Yh !␜D%C ̢P!2poO'2$=] 6$Y&pR O8#-zk~FΖ5;Gz4t;s^&DG-MI-QPGt0n=Jfԏpٌ8wrZ|\̕W%FMY+;ׁ_R[ |&"_4A6= |R0 ԚͿGwAgqE+3#+=kLUx$D[\(Ȍ*6pTkFT"-=}('7OOflKʅl}V&MtO)jӻ*tk#}СUCVU?^ЂO#>ȓGz-Qנ,xaY{H9U0?֫^]sド*M租ڤLt3UmJTc4d(ˎgW׶GYR?n3dϞr yî3n/p!ku٢UQi1ԕh)JdZRjnz[rEH\by=9UQ6V/Mo?'?vՓGmC0᷃/ \[ߨ74ysٛ}nsɗyo;>F=Ωh\ʴˀ&ʶuϑ878,"^HR%u 9ٻKSthTriz> h*7:E5JSq%Ab9~)GT? E/i :+wOثueCS夻-Q1a@bzΚ@q]w1>7tɯO߼/Jo7lU*Hz#"#!f5FBZ@uw#!4IHSߗ"gm[SR~NO "+=\5gIgoXGcmOǫ~ŝ &]kfҮ 32𪟮S|nѲ%;psm7)>7]nT%ql /BbnS;¸> =#PQC^HP=0V' Lg5mds+\d*uvsqJPfkkmN7S/U|,z7|W-n?cE|]@{A{v\o3.eoFusoz%7URvִYh"\P&צI[ ɔ\HnUK%h ZqHڬanoFPqn3鴺}_ SDK]a׷Fr=1?Eb&pb)Uf뀬끒rWjDhY{nt^5q5sƖjh4U4jYЂ==̔<~Է:\nmS1Jt{І޶,h2 ѡL6(dC.XjD#mFM#d hȻ0F>83(RAUvH'/ 30`oLy?.v.e|y ^a}zd~pvM&!j>>&2 b{PT8xiT җ-ƫ9D%@vgAhqW,fDbQFJP07\qih}0<__NNv{괹M{vxr/9sgE5 PwC< pff=+[{ XQ ;Z,fW:5WZsLڌќg5rFh bL=p id I'? 9k9j~Bg ڛ+`MJTCۊJv0X4 $SSAv =>]߲Ual+4O XQW Ժ) }r7p6r%E*aZ0zOJ&E=FHKCH4GbEנ\Sclci0Ih z hN>@୭ܢh^HkѪUVm(ʧ=t&1jGb+Z{Ok[r#F_v *6LLGgs jX*އY;tڠ@[|M ~:c|`#Pֿt|k?;9F_|kڎ]!\޷OϺC㓳z }?d#.`TP?cZzyǕ9Ojz@]9`zت1uMcT\I0|ÆL =>+dϋƵ&i.O駣H.dˈm@! 5YkD9)6&Ѫ! 
ϛk+s2efrS NRCLdURp4Mv=鴤*}BnjTNN)>n8=9߾aW?'Mp*R>^ӯHiEfˈ[M'pj:9Fkwrtr3VWDWlCX ]1dBW@Kf%iWHW|i|sjZ ]1ܤBW@ޫ+F++횚Aj pZ ]1Z嵝BW &[upIONW2 ]FJ<)eֳvpm\ ]1Zef5 5b֯fQF#t *VDWL֮.ٵ{(zt(h++6z=ը+FԾvЕVz(JɩvJ6GtHe]=%]ipھWISwY۴dvd1M>MԳ,i$NR/-ٖL[dDt"KdxUԹ[U#hi Wi1yP׸_T#ǺI'GL:yJm$"ֆ\#iᓶӑ RE(idx RhI^}|%LH7 `m[%m ToDDP70)-+,uk jB7$stťVҴ孡+ o ]!ڗOWΐ"TeDF/=7t(eGWCWR1K5tpO4ʁhy,N,4vZCWU'G?l<]!JM;:CPMt婧Do(KY (2bBWӦ3+)=~ .m ]!ZvD)7OzzA 'O@JBȉCI&J+UoUz`+u-jP],1-O S4LtؖYtWOZDS G̀فJRM&Pn䫆L?דCƓCœClkΓ{Oiíh]`!CWW5+DimGWgHW+DWذµ'熞l9Qvtut)3Et9m]!\grtButut%FpƮ:M+D(s+eLb<̈h ]!S{ZfNWRtCHWھʊ5tpj ]!ZiNWRΐeV= juh M+DYҕuW'y.]!`+[CWBw8tŞX 'TOEW)a=bU +v]Q'Ş8,5|?3HKܤkͶ2PoCB0#|JklR q3'Is*C R(8$5 p)Ѣ ؼ]HBX¸r3B(-R;1pamQ;Vt(٩WQ;0U芓;;5.7m+DOWҘΐ B5cIxutuFte#V;P{S7c%7pdq͛`:vdbr3.e~Ksgedh @ nm~ Q}}l2Y]Ů q~2+K\[7@"7O)641aYpnLB9D I)\I$p33NVN•Jd j/ߟl377h@n`*ozKJx%@ewie½M`=G'pp~慃 4ZX^,&wp޿4ppOa,G҉`pyV ^teۜHQpGD?77|7%8)9ΗwwcЍ/Iᾓ;< pG|(K@;M6_u#f@?ۣı7~8NA|Hj)ٮ'yP+㷛lXLH< j"(pY;k -U$GQ)$P+0Mv85"$LqA EzZtQ1Y_Nbxt!kPI7DR fyMCH.u%݇Zn:, ,}=?߿ټQ|@eQ e'Yϱ-Uqo,Ao57d n7Xߤ }A/n/6 asYg1轃J.(2}AX-8@O`r9fͫݤ0V_ z_$%\%[M P²&Kc׸ g.1?R=`t ?!}6>{Jӑj6J"Y25 F!eE$uBhzB|ףCGH:#S>]bTX׈Re\QBpjtB@ 7mqEytdЇX֪`G`(n`Z,R\0) $-K΀f5e֬kF,z](Cr %"M? 0ELv^s]z㎞e&yΒqmpڜ(1$%'8/%1A1"sb:eiĵaМI0$Δt@;S)βԅPHE`PHqk $D.3@' Fwű<'WV@z^ja4FDN\LV20PjH;Ǎ4ᣱ&8OZ0Qy "djɝ7QQtcNҕ ^84犡L2ޣ_FVi$zW0t^*'mf=x߃('WKV=uh*˜}v4~8o,j8vk}uK2>`l@CyCt.7_A1QZՖaUCrHlaAH[|6v VAq{ލNYn&>#yz2M/fp?xߗf@> &q޽] PUJ_oYII6Obfv[>%?˕/),&w?օ%¶X`P ^~_n^q}[+peN{=F-Հ"Xv!r:t劷H;]bUb}[A?n!Eٖx\x{u8L*8-n*-<20nLpލJŽ *ʹx(ĕanO F\#!d{1;\_?> )wcF<&ŝEشI)7^R M^-@)^E,"]ݐVik cj+ׯ"+X܍|8wH6?4/WuΆ7+S00jJxHf&>xW9bG2_63/@vnڕ^ݰS0GFu,t#'ǎ <"Bt΂S/VQPNB=DI)hӄqLԼd#x;5˯Ān_ѐB،Z-{RPs=1d!PI2Hfj!=l!,iq=}&m}kkwɓC N;jr\फ़7UxRۙLTDW$#`ўMm%*]vI>]Y Bu=0u+^8 A([[m{K)f-`4e#ԿaLOT`9NEoE" as&!P:Ffsq kuuJ.%%33xCmJ#`HTDLye,wVz=S5;!-U$ 'c`w[8:Ji > !qLo??{WƑ /v~`:MY,p!KE*$eY>"%HSM'm3~޻:&-9\䪏N^m'r{=Oa@v|{?΋SopZ/뫳Cؖqp*\VrVGe@:pgA/GV|&~GGxߛ Kt-;/fC:5 !>1VbI} v3N8w܏FBV㵿|@AbRBl8sbP%QjG9hпϸg2tDy=]ivzwHlUCgIӝrѽƚ]S۫Q54vu1gk2-+׷x&Dޘ U%[R-L֖^C1քg+07jr)Bj仫a&E>cdžn.j5ҝx4zyRfzΣA.YoBEX^?|o6z/8+_]Eg&Y=6^hc3%=m5DrG{[G\WU7DRBL[M]%LN2ZƓ.\n]nಗ)],{pD)L\! 艵 4HQFxhlrFjL0G*9J33x@1 d.bl)_Aj~/?Vc?0u"p0U<SWLtxİ'lj',R o#.&xJU:X^9j > HjQfrr!=hVYp[̥ʙvuP+3fE+,/iWy Qna(Av(!1zՕbƟpsgqnyM^@oy#f=ĥluZX,Mg5HC0J('i9M&pG)#jvmQ@Jz!R &[~8<i,XN`\~m-=9>8Wa#e.r ?AhPYtm \:qO`T}oO{ٵ._+o ]OgYVPi\s>i?;oƖۻ]rOوޠ-wׂ,kmI-]5ښQ9eey9pTۧxo9n=kٟwNFZ[eVlkY:^|ϑܰDhsQeqLhoԸ>TͼՏ} IoG|k?pv?ْ#1_%k'0{D-?JjBMG s$B#yÛ~?w?;&NϛwouO^p:j'0OA{M[Rilo47by^|vU. .:\ڄ>o@Lz%!%#G*8`Rп`䢊Hx\R$x8Mvф j "cL`)I Hl=pYs5Y?iedPP 6y&^G! ¼]tTb:SսΧgY<xҠDtY ф.G)3)Ά80Yop+LѲ1<+ƄTs_ gUe]LUNOb !tx&Ji`\gj=wLz7yzXb3u8Vznϝb>hQ:~mv$/Հz1"|4X ?omTDɨhƷB0YM>:rkrϲխ[9 3y$bS[6rmW߼ /a ɲs+ب:ʔKZLy5|]ZS 2*cx#@Ҍ.To$Ȑ"|8h5HP 8=S/a(+l֤~^O1CcW( [DY">-p,N: 7ĝ.kr=/둼1ZK%-;l %rc<(Z팏> L9WYJc"ޜjvq2OE~HbZ]bgMZ#j;fù%l LV&PNH&=k%O4{^YPg%!n} HV^z[%&üEC~UgN*ܤL&@Zh pZ)CTQg) ."锨G#,Jrk)v'Mʜ4yq_! 
okwW>lm}lTW+plؖlwPe"B7R~Z}@x@}넩8^y\(a+c\;\٥'yry㲖rʉO@\&  mTBk ՈTQYIc>/ F9ZHΔt% ΒVKXL^B!B"ᇖb||Zځ0ʦn 謠ؕskmo^䰂??̓y@Lr4Hr~0Cv̩ UWM!|W?'ou¡G?h|4ػ?P/~4qz&evm},sr⼨%Sdg"΂ϼ:iѫ?~P{}ܾ\"twr&'U^r(qj}W{|t1)T6 |Db>:T"07"VXIm% .&T~v𳃟 ?MZ%.h9 pM 0E$HA}D<*%8>!NGpGm`(9/&q.[~gwcBu5}J'W]w\h<2]Oϛ{KݦEAB@pxd(#\H5p^~5R**k]A!Aq6y3 ՚1w8R g ^y<;:i9>+:fRmWAB8Oj\b#Cggt!UYdc*#71IAd$$ѡ BTkVRɢ=/~\^oQQ#rhG2oR _)j9/%Shh\(Sȩ0-SdGzE2RmGCwH<-rXB[/\\O4\l1@Q0\YɋSZ'3T=>nP5O2roL-digޙMZHn 0boyA|<*ĨRsiND,2QS_^ؕRrhXq6WM\0^zyIG#RR,9#LREk`ERh/MN# 6W!HPG6ٜ?]{o[7*B-{H4.^lZ,m`1L%$7q/X,Rru!9pfHj3Yj2 MF!l۾ʬ5r* Wa堄Dԉ)D}r0$%Ax4 o YJuki'❰<Z'Sx4vO(4}1 ^τ`X(g@druʶdĔ /07Li 9p{ILdsȸ'" (A?\29F&98 ]=Ϡh۵n\FoqnJ;iT*4duDq`є)r88Z)4NR4Xri9"*;y4pEWPzJW_ \9A#B=Qk:t*T gWr˥'A'I;a{z+<{p%WY Wd&~WPXjNh^MG{!y_^j`?~\/?^I Kw3yq[ ;_&uK37?ShLj|k0^pOνVq/5Zw,_1m$ZdP.="`*)ewr>\sB)8-WvLn5lHjw8vVYq^}&uӉfϴ<O^|ͫVn@08}G>JQR_}؂qȐ#Gcr9CPkrDr_!'\-?*:q,pE*~pU/Pҹ#+"裁BWDfPUW_ \2n☴B=*~`/"j8x*THR˥8 Foʲ:oһIaKzXd޽}XЇ%p3mX-r>N8gY'eNRL2*0-B^&>+s;XFkA~w?/js>ukV?N/iah j+eA2iӪ\m@7KzG[^]b]zsO?^9.34ζ">ϾSO=5v4 z o1@:bJ "`ѡђe\5$j - DiR}_XvzTOŖR>L;#[^RTYzb'oE.`[gNEcT4vu]nMh[˷uwi}IZ=-dw7vcUT}5ZH*PkyuYjfPjA*2 9HoҞ6䂠#]c9+ 7)|&j[#~dlOWi []PGEelӫMRG &ד8LO~BA2[34qA9)f#4ܘhb褰[ˢ{zdc5:dF,ƞO6hT0*2HgrYr3`[FȹhVLCAָcWԖ-P`7qKgTJK`.&#skZEDg9^ƨELD.8RVAA,iPb)2"FwcEZ@8fuHָdW\-pM^t)t:;eC,7)BNf@9-.>. inxfdy7(nƕ5MEE\Dя=BGaʶLqbe=J1PU0_`w7KT7NwOg倫0iJgJ#}tВJn<ƜI4`ׂCT#RDH/Q~ r )WGIl7B٬ TP~oIݓRI(Bsr,9[֯Y( \yib~hKo.]?]|w\v!eѢMgVt@ވc8pQ I1d`*ɸ ^W$19ڊ^8+mzi\NY4) 0(Q@$el";PlQOF"WcPE-[V+gwŠbZNp@&o56rt|Ȉ~"?@ `dw$_Jw01uΛ!3Z$90*Jָ4Y.VwԮ Tb%JOjjDY%Жt$%$1F{B뭓NZW\3b^\_iRr(%OiO㧋o^s˕P_?aY;-ݜ~U&Zw؛qꀭĞ1wB-{?HǙzDU8LuҤ|&2&u-!/䒝#%O&x') \5{6^W1 Kq= ŻuVgᴹļ^#0<;u{zٞW#<\V`rzxP -6\>/ըJxݻ1۽Z_k?'}bU1횛.kb.0o6s+rnW#hմ.o1>Wɉ=)}wO1nV mƧ9 ,_g`k{lsAuXvxq.#cMѸjQ8'7jw67#kDuRxa?<K80!U/Ϳң9/[^5QSϏ0TCs3y܋˷$$"<_o^:._£}O.S„u$X7 ؏u݋ڱ]KZ9لoӯisk~˩+^GK&%<}۾h&&{q}Ηk#d<>7=pOScI~  lEH0pΓV8H~cu4h-h HJaPL%f|tZ9әPJ[;yֹiE{;ݡ΁GLʚ[.;wU]ӄ<)x,b`|AK<MU+lE)aSysH<xT(,sk%rMu*^5Hcvh'@rQX&Rz:evs<Ǖ>䄴1k|k'QZ+Acжp ͑GW.@Xm[YN> xstnON/s?f2Ȓ$;I;gAR䈒lSӺ6 KxbWy!4Bm9&jI&.0`Lȋ>St&m)8Qeh@* *D>ȜR{ g\gAAG rez:;ҜSiԎ+GIkJXCYGK'8!9Ҏ=dB%RJ0Ol9Cvms#98G%Z:,;osgYi&~ƙz ī,lPW}|Wjo[nճWxne®Ƌӽ\Ws=i\=`Ia}5WIi39Q/*(Y~Jc9ҳjZg={޸yy~GD{裫l"ۤMuZ{z[w[/Ur+Vagp^gūPuWZ7?kμRZ/PE_NGE.ѸՏE|R'M o`V/ ująj }O/(ݯ#,w#JU_P. D9NU Z}XdJ(1hpead_:mOyRWGnF/uV|ϋQp8+~K<.n$-x?tRʞz1"<Kl$}Ԃo[ \JFjd\nb=c1כrN$a|9jAŇ8f-8Ċ(+;ޤ#m6ޏF-l(˾)qw5xg/wW2E;gt\Zj|:s>Si%i;R阍qPSj4 |ջ0LQe""[z-B,911TN_1,Қ VJgiނ:"q0$hYY# < oFSs5˸=g}VuZ˵n[Ey{=;C>ˍG˫]&)8 Sx9si-Z =»[)y<^~ Yd) 2wGm$FF10fh$yvՈT:&Vf9:+KSjE%' 6q:>sػ:j1*;NY>c|5IK|z-_섒c$")*JGTD,d"R88p)MDKInv$I#T%ڔHZcLH{1Į1bgƈh5 }JmbRqV怼F^v>Ja}]yf|HSs?s9N1 Z&zyxSf]rn\R (PIm) T=OV`>!,*p&&25!ļYd5) `R$bx.qЖ 1,:QsGԻPN㏟ęd:j-~7zmVn$@(c@PKb<51I1rG,9% 9kHDMtbքn3Ke;TY먋\8(!8lz:̤֌ɠCJ53qvkFx:S[G_={[8E(YYdqCJC*1<ܮ#! 
z'*^&[X&z7TQOaci3>t/[B,\QQWe)5Չk䀉;X0&k%fQܷFט:᜖ɞ؋h(nC.k b8p_6uj8WNmn[ۛK^'@@GBl½Jbv{'1~dBԔ$36scT)Ձ@b!/D`dSIqu:i*D;T;~ĥ=Rh0A9 MbYtRSfskyK$ exz!psUDG 9Pʙو hN%0"NcK aeJ$@jb8" ISl㺤<bj*Ǻ6`J p jR4MݸNv=``valD `".%`JpAR v>Ȝ2$Y vv?xC e+Ƭ.[d WqB&C IIdB)E IH%n lC,vf쇥k!MxǍa-8U'JϿZ ^Ri_JvOؕuq.P,*='.G!JJ>?)S&yΘ3:hxjk+QEsګޟ{Uky´\v?OQ6ޏ3uaœ;Bf<{43fvt3tFL*Dii7sbw5w vo퍏8-Esªf=FEq;&x9m3opGs6&nD{;d.6W(.C4op<]5Q Qbq3(#=zhaIFƧ\F#e֢;JebcGG킹`wEK7.U$Թ5 6u.#^vd蒡{=#{cMrwzNeؑ:;i9_㯝R;.*~%=`oZj~i[}8T(0 rT XkEPS8wN9ڲCP o!&E)E~ëv~}!)ֹB[sLҴhJ⺺X\>!a%1D5<  m5\# q6 uMK4C(Pxs !|9iVEwBpΕSϺWPw$n9|}gcwLq4=kˊTR nʃiӍu>%4xY۶dNnsܓ~>X[ѝWJfݼ~uUHβ>>̽xG!'/ ؇B>"M QpON[qORZ{S_o:O>e{6C-CB2zV^eK ݼ$lz'g;O\-Y>)˱[sSuwOsOwNj41Uv:ިo(k4v [C펷֠PKYCZ)ъ'P:+}Q RWyT)Cv!`kew(kT=j(ɓtOE`.-:+ٝuL(\=kS9ѸbJ {r)76']E׭I =׸ZCNU1F*!е6j挭tO%VYH~TZbd0&nzzTHFCϧ_ǕxU3z<7#|C?Z?:_?s~{ʵ8~Ĭ<(=zPk Y_;Dg2xbqt8&2҆g3PfϛUty#ѣ,& zL\ງW"xJMrqatF'ԈK|xĉm8R}xc^T%8Nq2&PS2"i 6]m->u(0rf'59?{zjQ<ЉƩW-KgyeF؂C" (b8EQR2V"SD ZO58ÊHA4f$4ՙM`c'Ux ޠ$*_Q d{F;+PS.:Ee 2.{{!ߟWo&P[ӡXvׂzfG:Ap':҂C,Q= 'jJMTĂZ%X22lLDV$Q2h5{Oϒ"quW rmuJ Uz\Sd{_B[׸k4m~Wsp(E'|+݊uboݭe f6F @d"XŜBX8ƚ$V ),Bhk_?Go]<ƙQ苮G˔s |n/vIAp|zv8W"R4e&~o +ĚgDF11H(%"I C Y"Dw8RZkc2acĥUB*&-'`8K2d)痆&Hri,20(Re$6&!dR"R`0$B{l F)ތm_h@)܌q ;[t a* 8@sxlU-RCiqd(!)DI )nju'Bf43NcZ ] 5&0LkW*PL MXs6p57 1$Pq<8"K4o%@V|w{Ov1W/= a.٧@QTSco=a\>K޾y|\z({ro7`u E/3߯ǣl2/&ۧǛ1|guS]<_|܎twOྺACRs$|x%BAKyqL[`<`i {)?`Xf9̂@ #~~yڇN6N?t*0$ܐ77~8_= 6$^% M "h2'a,hE뤪;AT=HKjQݩu9c;VѼ+`枸%`ܭ,?;vIoƳGS,uR;k3JftCX10>7E?pj?kе+'[gMeq»hPAɎu8Y[(>]'uMǩκNnMERvj-0|#AbriR0' sBs#Ar%0[EN; TIU-\> ]؜_}v9LH{Ѳ`kqZ ^lIz(NQuxx/Sc/U8}b%GGkרujU`OjH{&5bWdL5}p{s{?ZB[4|s,҅9iє<5X\xqg,lBĝ ! D[pE9r!\tKEpAP>4LMYatҜrX)x][oG+^vw? lޜ@諭X"%'߷!{83PbX9ꪾTY/xa&z{1r!URJlھCn ,JHP`QRzTOA!7@{&YLR=bL&K0CD6խm &_yYx״1` ۼ۶ƶ9o[vzu5]촬;h'zٵU9S6tlA$雭.,R-u9XG5C}-w>T|XrZc(={LN(78:[װoRglS DטWph{[oghKnW역EZJ"6j I"r &PÓN͖#"ar1(NV($'$"y7%G&ܚ ֧oK}2]]M'Slח_.΂kS=l9eyP5&OV;G\YؚĂ[#X*! c&DFYbv,1G"|\~@,'iۻ6 i+_vE8HQG$Bk-߾qSaYB+uYld^Lq)IE?d./gȷ\pQITJ|v/"WKO#dri\6>NH_^eKGbfR$_QM-5d{%u(I!哬2=YQO s1Vb+WdX@Iڗ7rU5ɤ*cy|a<)դ)[r(JRXѣKӬZ74pN$i!V[ZOS.$|HL4$5-{.~x?ߝ~f<3ػfXn/3C>vmM}lKb J Kg(/,)|+YR`H!T2Fhj A'ORYنrOÆOr MIƖ ZhYK{ 5M2,\TɴsҪ]>Y*̞Q#9KC/CѪ.&ȚL!Q))`BN PLoshہ(Ψ憫8. i+UHtT3RtdM$,D> KWKŤBveq/HR̠gTJ1FE5fť0>'H3ar_VșTeo"1_,tiD8D @j 8Ν\:Zo2޻({/BHƺMɍزf355)4g4LRA)r~,^V^8L6$=ܧ_8n~~ʍ&>F%J2ۢڔ8#2>%oFKO'޸pń'r6\bq[%y{+%D1&r.nfFa]j>R]incU\up!"dZ.JB)oMBDʍR--e[fPY,.[upAО7 =Чa_-XLhS&Vh'AaS')5SO\aHQv `\ _[*!ڕE(.yvG:pOnò{O ZD!T\m`gSO-O2w2 %Lt{0l Aeh^]aI-0M$^-ݞi02Zyt'$`ЉIZy %_K/"NR2dӚ/=M^fo[=_2{JR>˲p.ۻO@5YOPP-b!fEYy`RN#udF .R Er'Ta$5:Ɔ F_YQM/fbUdzsՁd?4'<-CQNWW1CȲv5nߡ7%jϙTJ)9q)a%Ez?TP% {T雦Xk2HdLJ~m.9cG#}r2Fkl{ѵuY?UүQ^ !Ba6cA+aWV Bvx\$twiTb7h)uβԶj^e {F7e:͟YS"\S .QƂ.`6[7*d4&P# cY|YtWK]-;\b615:ό-V>fp^rR;tJ|=O C9(13i/r)wrZX?>rvq"厖ޏR X.d -t$JY<մx l[wܬjƒ!:qgwםǖ3k?YsEzM OOi~]T:n:"Lm:0mu)w}6mϑp 6c$Zč1T^T7l0)4;)˳صsWW*嗯-W jp^ ŽD]*=3vUPFvGto7ގ0Lk 4g  fE_*F|~}%i"V}~)Ae>6C/;WJ |TJW>oίLcl$T_H`5<9Tj2W=m0՛ܴtk[:[?DAA;9ۋNa^~1޴x9&Bk0l:+ʀ43- /}2i^km<,ae7j;3`vo~~NͲ8 x3x*N:->ֆ9!7%se6fIObsJcj ||=41'$ jn{Oo \S =N=ҌMԀ\!ί`3Wand>ޝ~hhF}^Uv`QM/ J ׽F{D:mrץHieQ GQ/~3RfvҸک.IR5ZsⷐZ ASA3u[)5~ -~Ei##+c $a1Xu$WJœ=ɺ[KYd#9w-<%>6B3 3o+$/ad2#1X_ht!煵`v~`sFJcM\qɌ'Z1&MF@6&S2ZYb'*v7W.~2|wR ؛ NY $$V&է S9Ga.yQXEpU]jҵg0(m9K4ףnVNcg&rNOixt@0YΩ}2=; ~NB`mP;x:'M?Ɔ1-gBSA&]ѷsn԰"noA4I l:ȷ|ح"CH*߷06(Pf}bFYu=JF fг- +3`GGlX>vdh6]w@[,- دÁږ CƘ*gйz,u_@Zhd0p5W GrѬ=J`szM>_&4#FXXy&n. 
I)* @<%TT!0E>)^مW -o(ICi*s92d.@ bbF4/ Yw~#t'乖R\0yDt1!`k\?ᎉ %-u~$Eފv.0&E0"ZZFP Ҥ/_q A:%KA~+ئanCPHC.~{ q-^ore7)B {@ڴ:._׮:A fw?{M de z>I*QЩ]7Q@^2*TFYEE XL c~۽Tiin8#R6w--Yb9J+/8e`\X[z]LC;L8JvON$%Z4 Y>i3V#1o/-/ ~#sﰶR5s2nfX U0͘M$MLpR5st3`W>|ISX,mX3ReFNf W w9ݽv4ymb IѮfszE`̱æІp9~ U۱je$vIǶQ?:U(Gi{u ۛy)ZX-bl#D;p%dgX3'T#7jX5([t-y;|';Ov`bؚ1z̫:^ؙn@5v8;8։W_YA^ST#AjV`J Wu]3M0Lr8v"Ks3=׻h,8aH rۿ רS Zo"`N{ 5~-9I j\,2=\ο. ޢtfko3Ʋz`|<3׃T0E. z3a}\XϿ]y瑋j7BrH/ eJrʿYEq҅*6XapkѤ 0?'wiaHPx5K?}Maήex s=߇Lt#p3 i2'oQP9_)٭sxydžyd#]W+m+?Pi12_˴[gbg`2f6magְshX9a]ݷ{ϟǥ'ő]׭קSPґmi^`DidzEɀONB;Я`85c<55$`Ia±tϠppx*0bޙҜ^௓To)(|y>JͷlR2i񅇗D+.g)T1 ӱVQAx/F'U(:!Im !hЧhBYA(Eϧڕey;pqq^wow{Ss"]Ig n_zKZm{Odj&ѵFh-r{P#R~q_x\//\V@set!0sp5a:2z'oZNKNdQB<|orlgK8_36q9cs|fYG{i@8U9Jd^3*ɒsJ&i\rw^rw'10%%*s>V̤`* Qd|5t4ߚm$.2tm fɐ##Y8E"xS0;ԮyH{;`=4ris z{m:GTn7!lAK0OrYJee*Oc8&-В9Ri>h%J![)tAir0AKb0EU>zOSFj һ1;"KCGeN6垢zO4 ~R.'pJg n/x$%gs9"&!!҈2ir= Hk%Acס9kBnM8ϘeOO|OkTM7_|pe=;|_a*VN3_|䗟Ptrq9[NO):::OyUL{[,]ξ?\~iR?]M|!6;?0ߝ} 1ՏG\@xHȕ[>Rܺ&:/SLU2UTy6B+EJc6!;r3I84Edj%^'M$ f7UzDP:N gy%hq p'1|-R~wlSIb@Rr6J)V>()*G_ΉH12WQӐ]cCVɻ˚ E7FK OdgcØ֖?ÂY`FKq7*O=>= YsdBiYӇ)X;2+`٠uZP"r,5fLh™x",MNY<-lHԓ2Tk{1z rCgPeZ 944xMK^Z)yLyX{ >~HAX黀x{Yߝ&Wdb/~]Idd Xl`yQ!Q}6č`"t()Mbm!%\c4FjclEC 8"0!NLcC`B&왁NViabmi*mYTnmkAk%*6)֚{"Z_HZBG73`K+0 Og仞BBen龚~It>O4*NgUcQ G;Nۍо֏rْryHhEAH)G(G݅tFƼJ9gZ"^ÀfvoK"K^=&3-˔~)/3j`Y}ka6{yGw7n<Mr:^\_Go|O9W!= 8A3׀ģD >D*7Z;hkD vfDKbڭ?kOLil6=(v('K:cgmwrJk, v"5(/qd* ?kJ!}1yCjlqܾF5OX IƓ ) A9-ʲ,ki tbOAjpƴ2R.~`[E[ &yn @IKn/6% +U ܮDvn "Slg|ΏN+JG6{vv&@1JE?x,,K= 2%c2HBDcd] V%q"@E&  ά CPiAI߉;( me)Տr91?]nb~ҩqA> |nzt1-MYfqbmr9<킄S.3vEFå y%T.;޽E<hq|m{== I%->^+Sn_ty֍W>5 0.qvֈ_?A1\L܈F O"ҸnP5 Bi7VBaPeQOM>B>[TĮXqbׯiV>`EX.LNKf"mmҢQ%JK$ww-LFCfpSes8^vh|S\,`d@%׍tFY$ib@$*KFP A"NWjJ$4 +"=V >57T/z>j *?́v5`@T3 A0؟TPc3 c{>$ܥ'C u! }/g>ڏ]A(kd.~P]|Uڊt]rTPDNa 0)chMjE0$ST!9@%TJ0#Ϫ20S&D, :F5X\>J@ pX 1gjv14Yq,>ǴDWݠP$Y?"v H1Ip3 OnbH% I{@Ěa~6X>fiJPnHMn!;3`Vw-BmV82b/`WfV907rQ9aIn2 MB\2|B%#rXfBw5>qUL8v!(a$ N\:^A+9c>"_pگC] `4 c N8pn01ɺ\ ز^YWO Y=+F̮jhxy/!'^B^I=~|$] im-zP!㧗>%԰cAr A2z8D)*- v0TݽJDQQOMOw$w)Z-/}D4(;˵52ſF2}` XX+ӷK+%iuk\ak?nZFzG-V1!N7[D+""liZt-[AA_A }A_!)AT!rC&LMwӨ@TV&*/QjPE[8Am"|J!V"WVRI Dc]b 2 )Hei$T^JxMlKzO#B0c^@PW .q'+Mk\Э4.KYR ɐ~dLUN?s6̨h|va;}!zT۫Ojy=&ۡ9\oǎɇ]>5741ylNۛR#-\pIbƔ4E{`~m[H~DNAǢ/`gޯ67m.,y>姹{0E/'oV7߼齱j58;34#ൗh%@C$cL")k 4XU>-1 \ eET6@ˆH?fxb鞮1+wȘ&t.5z_0ǿ!s BDZ[ZGh+:$ԫĒTE9rs8@l0\fVYgt0mZYwz IѪ$=K_?v]4JW(vteV|ܬ^rq tE\7V~G{[SBĎQP&t^.gkWxʞw;R1S\{kk֛KK?ߟQ/&GQ)%~F𙕫B\Dcd yICM=nTºb":cTn@[2/iۺ_+ nMHW.Q2E3 b":cTn=fy=m[֭ E4JnG֍@h-щus@P-vukBBr)%oK5Gs٣iT8v2WxM⳻A(]bF5#|~g ~.qB0a'0~%N*&g'X.q%NhR8N8'\&4c%]KШ& /N_r>/qB '$%N jA 8QM^S'KЬ&pVs˂Ls!A"QDB?"MJHDS!RAY3A JD2,K2ER`,cTB Cn-RMSĺ Ñe$4#`FoZl싛r1_n]۶er31R@Dll4g*>%J{o|x*u%-s/u$/c`;Fā>}YѯԤ6 &CG9CvxVAx:\tx͞ρyc׳5 ~^Hk^dl]zi>q`.P%9Ze> *k ֯>bc^»n᳼Y8Zm)4#T o5ŃzAc`ךX "UoJJ/ȲeC,lZ/qZ K)Yc/p~W:2读+ON0FOf܀Q.ygGd5judOu$Iԝ;_o+b `M?)jhgQ$ha$DTa-v `Sϱ܌w (yz^ϚQ@7m(A"̿lSZ0 eF/-%%ǻ77[تie*HU %W>e9\jU ha@9x9%h-Nj{4ʛկmJUї#w흭fvTևi-U(w )'Z^)ft,XCK[Nv@NsZt6۷?)PK3}ANӾI :VIIX*a f\ܻ]owE}~ߕ2fs{sЫ;YmffiFqc?G܉sRA>H^5[{o9q{0 1wsfnp7=G:]_6DGDyЖi6̀΀L|1)Cñެ֋УqA3BNn c (lA\~h8D9!mWUxWPgx,9a%c rIgp`|mlZD1G j3+ {o3:v;dn8LQjOFF͞8?S/<9^ $f!^X؄*eOw˗8*r0谔m:GG4t!jWC jknA t|~;?1ȲgqzQ x!%h DRœXj2rWh3h]koG+f׭G/b';/]]ն<>IJjM/ZL CUϩu! BE vqj41V! ERF˱LxW+^s&Ez*}auo|._"AĤVQ&P~UL$ƤV_]Wuo=pHMpVWf ;A]QI򧷿gJsi509X BRGQGUyN nD yM4Mqޭ-GAHӉwqނc6.|do? 
3u2D5T|G`dBhRTF|x}.l:ǻݱP VۣWD%./F7l(gwq(TnƂ(*gp1[!dEUU%vfqz̕+o|ĶL om40xC1jEՉO& =WJJ|ϳ㍖0mx5yn t/JCՓFyjΠD!8e)N)M=d*'J'$s"$8 \$&oNtj證ڵ a~0P_1O &?6<͛P'SI]Y'8-tIZ0*e,72ҍ^=ҟf'8*H)e)QR>/)wӯe>8 >X`^tWꂀi9WN{ْQ[ĨaR85Qi,I589sPYmt ]nTQ!> rP2%g "eeDAw?؛J>YN8Y4V\M^r5w=xQ$ICdȧ7OAjvuʎK+ MwE8Onפ-կL6IK7DslPhk,>OHI` ZU<>`@7,zq]qɇ&`a eOW;{M4M+E M'KaeѮ F2%@$S 6kٌE+rԃ/vZ]Ph a+i=:\pWegP^sD0f{o<:h.I] '4cn)3kV[ QIz ̉.fD =2jIy.2ⱋ&1`@=Sp>E:Zxp! ΁qrE.R;Syeϸ{ZdY)j\k6m圫\qzźHJ{ _ ZհRD˄X /XSC S,aۃvE !+9@%+ Z)#9S?:YJyIeC%>A1űPN2hnwjA\EWl~*%^9#!@tTqtʹden.U^+*+/#a\>+SUd-ȣ;XѲRnxH:~Ri6-!ZӞ9>ڝCĘZhAW ]v(l_aw-T0!,%u@ iޙZGAxc NxUOti+]CUf4 UQZ$j[vPބP”楬c kIƫFQ>$ 7W; zs,<"{]e_N'h9ʐٞS浪ic 9rHnUo4k3D܃ǭ'S9a:ݻqm /mh8O,bULdWh4} u8a?oܴ3=eX BIb먣_ "hZv#P;›hLڲ4Cn4:x• NK[|cwk&ztDcj: (H%`L)D6MO_-j5^^IYekC,&|!ɝJ)|?$l%a1[ȚUwJY ?fRڰF@ZRUC/O?e7ׅ of]nHjt" \ωq"@mYdh'泒b*JoRrI|WZdӽFX LV(\F P\]n:~8)_.Mlk⮧a=yB$rU Rv:]nxӇI+Tx*~- BJ>!@;OQi ߪyڵ~IH[E9Vc-/t=FRR:8ϋjТ8^v.H[{dHڴyzuEnPyB?' jAUi Eq-t;Լ<ǬIŶ-alBTs2t9f i0URonqL?VǬ5E[&s{LP^,jzܼ*}a5¡k>BPdBe$ۗ<*%I:%ٕFtPEw89!b&T&*3Royw<Я>|wQW 3%Dt ~ A \3tywۿ] 7;j<-IPS$eAJ%zڌpf"Qe,ޛkd6^>]z]g2SeyNY^9'uB-h zt,8-"zGSG$=Ʌ䂦_*$0JA2¦Zyr42=cIYZkrN̰8k 4&˔V3*nq'L9qiPm!'f4&w^sՆК+^ 84!L&3b"f2/p'.569D˔'Vk+s\x2sC< >Q',8|JaW\m5Yj1 S^*cP,N |y ߽rJNb=,:k X+ W;=}~.<~~Qz`yNZo_]0dBhR>8dˇ!wW)MFX)&\yܭa2 l[5_>_ݾG1 ⸹γOo|qP8e{p)N mV1Mu|a0CHth?>jc WPjYGH663hP ׌B5q:e4R!Ec ڭB5.[h b;#HCgkK̡,Lew!p̐JQ!amsqEHkn)~MA@cȐC)4" _Ej,_5iQW>hn #A\q7d2w h+. ZNsȳlz}0ʃw4῍ ݆#"':ls7X!͵5῍G;l]z4avy=VT[ ɞ\jY2c\!G;g†)&@[ ]H{k}()NWk'zXMF;ؘfX:'?@4vW=3oǶ|&0bUF׭V4:ߔn)d&,6RD2cVO)xpjLCsi-Q¦ҥ\[i<7)|ZPrz[/6qzRa1VR꾎kl;1Vc6]$ePĎu&Oo,ƙ" \kMR=Ϫt~\FE JZyzS5mV`^ n7إ_=]..otl6tǬ/ ";$ ttoESuHqWPQ@YOK)IY?w)p[My *xziunn yAJQ]IAJYt{oku!('nW|80 =ny}k(jHKZmthmKl㭔8PmC Or*`'g/gJj!V&@lJATwf`]"}R.{ r!~!_=X:rxNeu)fAͤ".}('Lj;< ̐G;rD]d8Sp-ꃢMi}t 12gڊ~c@dՔUD48~,#LY+?׏K>*-9 L ~l\ aoUN:fu:tV7R~hY<*41Eu7)˜!ք|^%qND=ߒ{ͬ%rpvoO~>(h\hzypoՃ,#jG9TbHWbt6-J@͓-G5>^Eۗc Zw'pLjx2pVl8+kB(:hRbr 6kxu6ցr{W7˯?Tpl1E)}ZL//I nUQf|w`}/FDmV^ } '&jVJ9u"CH!j ֥JZTczW^`c#V9t 37AmD 1D!ET&(<]N,nNyPMHD@{3+ũ1sU Y{>?ݗkW֌qL2dct0_nCbl-ȜĊRݸo q74E+ x\>qAR&9-'qj(Ԅ"3G˔N^5F5y 8 !By0t#x XJ%yndͬQ5DsqDg$Jc@%Ԕ֝c.~i19#'Hin7Ue;Sդ>)94f '$.TD啪HYh)mJL@Q>X˛h )'FCC)5}n1#ͦ>I)|#0Q5:۸`o[6/c0eGKFfGF3.g|r"c#!$l[QtM&ug0ьpB dxEL(@Ljnt$3Nf?j2sMJjfҚ2;7ϳYM;3|.OvH ('r2]iɻz!&0H%񘫛׌"&mZ0k8F6*vgB8Xx?3<0J3#8Gыe*N)Nȡe1C8=ڽ^xŹQS@MJ`զ&'kV,)-iȈ5JK$ZA' 9FSP|K  '8ڙY ?//Ro}餙./1@Fi-<4׈ġidFDn棉Y<@p4&pWJ ~X⸰GQ%T#?>_<|w zyq, H\"燋TW#58 @߶yK K D$H=C"F\)ږ*#1P*{9, #@nӼOQj) ND4a DJ-2ѩq@UMkYDiTSmђM>ɾhR4899i;,s8C]h) 7 +X{HScs%% -r]N+#|lǞ,4oǞcAQ%f1HGZ*:o6MѡdS.ƾX>j |")WxE׼Ǐdޒ ^H'Sc,D|0HGNs*˃u_ÀjD]FF5T%[%+J(0K#XtoȄzj50Och]ѓ7"JyCb^&L՘V kKQe P,Jrkk"Y4wR[7S{J1_-s>җH D\Oi@C Ϋa5\+( V: b=:Aڢ3 8Z+ zP\MH%m-emS aSnbs{k"Č IDBG7cI"Zb"Ž*_uA; "FD 43P~K:hXOzGř},҂P6EƴJfֆU`2g8m>*eɥv K\`*EGBՀN+Sb) Em=k!e܀NiMB?W}]^j{gunv fe/Wj/wd~%/hՉ=*"BW\==l!ö, K԰0ɷ^)LeS"1qX)cpX,?K_3Ƅ!md7]G8 ?F̩qa3e\_/4..or]d zhUڛ[<\E&F\i3%.>k9ѹ: 9qf-V?=:Wo|g.-Gky2.jZ,ܾu" W69_`90.?ħ B ?Jx= f;yAz8AY^cAٖ );b" 1t]k}|hg\Ưׇv9:)o[KޒϋjYa=9>0+66Ӟev4p.hc)FkL.f2S8"7/ᄐ9[SƕGw)Z:.|LLXZB@T"L9@Zyn?ͦ'5Hk`mX>QTL$'@# =䈰PIA`,a #p^8a2$E~ʰ8 <`"ƢhJNHěSr S"! 
|o|xވ>/qXQ; n&ޅpP>I|k@X{smb%Ժmj]jpWپϾʋ7PeFI_֩GAxu~t's ec+em^_ڴ`3Wj;HCF?sDrqv}6P _ '_X2t&cwЇ |zܾQKQCzP _ql+U})tR>`4BݢHxB܅+{k_5;LjyС;n"}wʅEׁxi?H8j9KNI6kx5*yG֨ZNK%|EhQ:E QΚyEzh6| 26jpB:ylL^` Z0Y|<u2 P&m ;&O6YKz_/yJt3R>x'z։ŗ:h &>K"X3i$d;6E: ^P3&/V2a9@kN^ d='?7RCR|_}"z/+z1ha{AԌ]ZZĊuGk OeT֖/=R_ 𥾿uKl*ο s=ROduTFr Cw7( uљ.`#Uyy5+OnfPK'^f0{ܸ]W9nnj"sglx:A>nD]C`3$[+w@pjK?7s-c C^1]f7Ǥ1zz\\Q0i96ϸa1pmp嘸@9}; Zq.^PRӁyeU!TBV0Tf4: u=u͕ gK*M7n-2|s}&2|_#dYIv&Z [&ӷLJi)޹ 7 64jh\dR\̈́lT+JboW#9*&\ܕ"C=t M\gQ^kƘ>#?]5`7zTWspDdN]a9LS*$jߓ21wqgPR1bjޡ/qVөW~os׌u%A]5S%F )z1wWU^+/W kޟ Լנ0ThߩofrWxyIK@)ې<&yV1$Q!4㭪Q'{W;@ tlYf|JU̇2ȸ\]~bj6zE'q*G-͐`ZH@ѐ %!kM [q뵐zR@qΤS" G"yL1IRd<:@kBLKBvr^gW3g1b|DI=B$di0A"`ۮ4B ,{^qz8#CI% ts2YŔ5Zl!zvUfp py/z|N@k9sɻJP $ބ Uрa?\fWKтJ1 ON 1,OZ"_oN&(@gk9];PQK;/{ u(5Td:Ku(oR]Z 2Dr`91fxL7'hbXQj)𤹎 g52MXq\JԒ(# aFj& .eaݤq6vhg5MƽA7(䙈ŕzƂ( ɹg> |0RN F\)ZPKs̑HMh, BcgW19F#[}Q^MF_t2Gwp9]ms7+,} /ĹK*ٗ 6ϲ)'[߯1!9$1REq^ݍ'8D^Ab[9ޔr4*R.)Ւ8L,'@&RJM V(QE <˔]N$Yrh9|j<%% [9@JwD

    GOo 0}ؠ,<,f 1*r櫷9sꝳ_{sy-PZ.b ENoH6UyjַnfqL0oSO`us# imULEJ}{B/6zcnC!3i6Z{PַiDoF` 8#%>~ -ve Y)VK)Ī(fL,Hq<^ȴ0NK$FC:'vPI9_uHE1LGġeEO9Зl@BIfT"MbLE(G} ;I(|9) 狵Zq!/w L øRJ24K嫣 &a쌳Ђh)ǩ:_>e[bff@+(R;"r` 0-qюE&y-avnv\~.۞Q=wc>c$7݉'{3wf_pw=eڈSD0"7wM xy*ûƏV?OIu;& 9h:m]cE]XrIyS!ry~-O=N(JZˉI+7 rtUo=x ~athtg8xJhbճ3Y]諞U mNo{uQrj!/\]e^u燪7o)W#t6[Ov88w1}{[ PUٟ!Fgx\:i dW*. b#dU,gi'Q)rV0SK%m.`e*YspDHLe,cLPRig=??1$c` oYzit8%^qK"2B,V"LuHkh,dH$p?-KqZ-*'}|*ߡ߹ < @yjE` W,Qb)WLp錱F"oȒR@=Kd3DoΟ@cwQqLgm m9A"d  ݛL$59r]Vu<{wy8^'v￀E+u=W?x[RОG}EEoU0|5(.JTzPMN^zSscֱX=wkl~|cǂaQw>=F^"N%"/VPO#M0A"#Rr7ŹIJYwj1maÖMRy GT֪cC9e:ck8eV]>ެAA bՠ|L4hő]`DCVmٷ7aFm8/uu_p!$'0ޥw*s= H׊Mb&\d]_>4ljBs1Ura!\/,u$@^Sʀj4 jR+:aY4Vt݄]~66v($\LSncW 1ROEz)q'+QJo/?E\$@n]\p=ܔy)l ~xoW!G\\-kfcء7_TdwLk^eet]]U<|ff/Nf\'<˼˚(Wy4䅫hN#TϮu5bZ\ RT'm(SGnMn}h W:tWMUI1I D=GIp6FiANm-"vxJN;%{pMɏONETzk仝F~IZn[m]]wZA\4[Q1 7hgB"6*KvLZ9v%S&TH )mfKm+`K0!ʔ@j:_|q9j }ﯭޢT"xU|!J(Y.Jk8h[Ujdzʣf 42A) OFc3I=A:L:KG2<dBP B~7ICe;Qi2"3I=Iigw"2[$Ic:9B3H$.% R?_&%ƢD 1E'{T^l6#Z| Η 刡 <P5 G ˌJҔRYD;7e4+>_1/n,b|' KvkNSFp`P? d>eFLDz=Iny)*Bң$c+$zYފa[ѕ%]h {xVuZmLxQxx<|m T2_܌HTcJHX|ttA6rD(SaK'e (bd=q9!h#B$m*О2.-эֆ V-)b| v] : JuY"#/7 a#I]9$FKR< )x-Cڒ:GaOA>,2&]h-|ְ,$"U`q\k%(%B s@"k[8>`1 Kˍ }DKBkp=:2afD* fa`5˶# ަRk1E`>,G(unŵy=PLޯ#dLԈіۃ(4䅫hN}INZx L?+0F5nv?'d ʸ}aAKA-}z7mGLMNsk}GG|B2|4G&!/\Etx.1ic_U)5R~(c_W٫zuUFN)|fӥb(,47B!.   b-jUw<"))kwhiINGsuIy6D5_scy^* oدXViSpa0O0шӋsu3WW_TU<:yS;S~/R_j @!&Mpg)]pw^)yzXXX#v>?'$ xB~Bb`EƤ=sY%bGO2í7>&LrzF?'pưԄ 2 RׯJʩNUgj{_18K^g(m?uQPqOqKINLےEl ,v7,g)$sIAH(1H`qlG&C^Q8_Z,%M)pd%tq`6$vw0f3bF_' Zk6z/Jҹ(!dd- 3FmF%`n}rvA@O' xyПTt_8\tRY+CA(֖;Ly5B9FIVT3F )^u 3$$&Yd^\\BL sjd<.ò L 3M`W^H)_<҇SZ-U ) )5)E J( (IN0b@e!V:H5nŧ`Nd!L X2h0 Qxav@ ,o!PC밪߆Ϡkټu5' BZ)X3EM!Q 9ÅҘiV^a X!9*va9%΂(]z:*T gݒ!/:{ @#P:W:nL}AX`$MΈ^{T XTn UR<$,R%~npwsYIi7KWyAyNjݏ&Ij0S'X !YG:11gt@Nf!JD2lM9*0UL^>iy:w.1% GfMx)"LRBw<°`m^zwbNA{u1 { -vfV ?̶<_gٜD 8O|kt͘ z k\|r;;Ųego~!":hegTO46BXO:M'gfS arL!>®?2) ʽ'ך{$ ?%@Q":c V`Hh 5@TCeeJ!8G3l;y6|Rd2UȂefC$*yq BȨd*$>؏:G 4 @t u5`rNA'/*!02u&4,ZJXQ e([`h-ʱ՗*&!T#$F v"hCZ.YΖe@oMx睺v K  atֳ@$[wd)18\[zqFas,!͝ӝ;C9Ϋ?3ivFH2b}}׬C9t%t@wqlx,IK)+-ÑȈT{ݖW `@c/*9R(Fjj=Y+b8E)aPgʲ`QPȎΎl_@uȴ"e!3#69zXp>ؓj|n<䴕EupLx1eD0ӇHri{@ ӈa%ՐNp(>MBrx@S8{><8ƐW.A2#fݬIB11hy>Op zjE4Hݬ3>脎GZ:^Dj~uSŐW.A2% M8FnN}ۈ"S~c}JO xޥ*7vlٔZbN;rK a~6uVkͻ_[mVYޏB~+бXVLXB߀|0}&$$BW8zp*hB rŮubc(Ap&.0% \0d׋pvOa 'biE<66s6P =HE`;2zYb&>ޔ+;CZ! xMV7ԝ]lB)"1Oun[q\B8]i'6H-Oel~:#%n6Ф@s3'9.2S2a_DNu6wC,Wz o4]Z5wO USE޽AD /[Ƙ<#HW"e1?o/8]{W<m?e|8/|C!in]&,AܯbOKXo kGqJOV6Լ9~;t;` }(Dn|2,FF$訾f3M@2͟-ǎK_}IDXO[ ڌrH '}.e7G/)NՁkwlQSQ ؜bz/Nxu@1%qͤP`+@`_X֤BFk%@0a 9~p9EU*f a9 YЊr[#BJy[$3a쎧2]GW6w޾ c=תXcd`H='YR!2V܁U[ikb~6+;Hm=?ܙ{kDךhF<琂˖F]_5]v:{};{~}M8w+[viչ5+;^ivkw F^3ZpA!9N=WA) $ua#R; D QQ9KN@~t/F';q+ F(~ H"6T>6RO3ssVTQ8HpF(Hc H9Ȋ='Z^{mL*%J\Uhng7wKv,_.n{kmo."X}Qn^m!/c?y''otS^&8~uU#]GLy ~.\/V;3#߭»e)C*ؐK mi+;hW'ïߝ}/ ݻ?ELe ֿ\´~@ ܺnO[Dedu;eh H[þ1\/>edO@P'/ǝ~qmǝը8qُ=O=FA =$/%4Sz&:8cM/!I /ӧs@;wӃ3Xwzqչ8hV^٧s FYtb-ʮYzq_ze lb!=LIa 9.x_O7:evO3 r:9ѝjUVG%-v[m[Bq* x4*)g)֧PsTyZ4Q_)u嬰1+^U>H\5%hUz hRrc7%&tpI'zBFCqܚ]8O£"iqBq/˛;nݮQӶ{9JDW KN"V! jC(Ǡsi2\(MUb RL".(À#h,vҗU*foP. 8 pZG`FV dn y1 p:_ʌA *T(la龹S +Ne΢ 6`K|Q0n5TۘWB:DUuxYM7>5kG}n a!⬍` !lFQ ff(3MhcWs(>Q+ok9I:=dh\(.<244VDЉ} .Ѣ-9$pwŖd/x/ۋۗ^2#0I^>؀LIoރh `Uc^iWB9q& ^.Ugޯ/. 2ƥFC//9g@ 1F3)jrLTs{WMFrw,J=>n:jh͝.MpU5:\䷁㥖A8A=z\LVS$vi6Y|>->rl =anWyV6|/9˧ 2eJt*,'ddDV#LnS7=tAu&xa2Ƅ!nʏѻXy`.XoX{pTPkhrogӳsC|#2/9Gbb6Zٸ IĘW?QkxfI'-a GҊz>1,5OFք14HD %szV5}p݃/I鼑 NfRH8AeT8 &#| 8@" X*%?x. 
9,r}vAbuh|HhZܿ|B ǫ~u4d]уAsX9^ W; IlV7ٻF$W,Ty 臁^ vvǰ3kCʣ[.T7H%^bVn %Ȋȯ⠃78l%REHJ6AVށ Bv.XLUrP"-:xhYYw V`ה+9:J#IdBk>N(?mZ.B28RPG+ hہ*[`\*.(HB|.\]`񰁖vR;lj= S|R΋)"J1 UZB;Ù*ae^UpzA%ghdZJ~k(Tў-T4)NT1w1rͼ38hx<D?TZ]A v +q*GUAr*@%hVՈ%hVqfVi!Q`/ke*>+Y׉!׊|Lz!jÂ8)]X”R1\vd!_ؔh[?֥cӻa5VAޣw;^8t*nݺDldڔAuݦIM-g]mjm^RѣZQ-0 JbDRe*+&.[-c7K^Lkzxkv`8hk& 국bJlFj}ZFꉧ4VEht LVVJ)B37T੮6FP]I+_2ܺ/a{EIuYm^(j!Z,%{g fA9Yn8FLAޕTd;).[6R3,).Hv}HJy7{R^:JoًKiqD*D5\ovu]h0I\v*Ut:M6ד=ns# Va0jЍkY -D>M5,ÄRyxJ':8ͽ˛ Mp{}z0x᷹{D.=jҽ:9SO_^{}薲+us%wQI9#㑜fJNϩ q08m>*qZA@eoۈ͹V@TJS;.BE$@PBS{"XJFiqQT*"b!`sh8==S$p C.K(614_]^ʌJ?K-Ec3K;qRJcS=9C5({G]]k#D}-7klJXjvN.|_P6Cub!Lf:^~yOolf>++Va65~J.St˔ ]6iiM !8LS҃xkz3:M2kBLPCzO[Quwo3|H4doa.D3j`9coXZ-Ӏa"Ԍq_i xpk~ avhZchlyn9oYnV Ю[$L0yBA_Q);'Df5ɪfRi~|\v fq( _RYgdc.]&lXs{<3E!2(B:_ZS >굫y~ ]L>Jz%k].&˚s VߖOa<>>tDs|LץmL7>M%BYFQU URO jK<'NY_ͩ/;XFW}26DH!%y ~mx<߁s$NtQ\sg`C`c<\"2㙍SjQJ $2M@An3)QpSbh4ujiOQrc^%_.x[噂s4- q!T^>bvKyx >TU)F\x,DpA a2z" 9X QUfLrMō5`$@*]EfU0".ۈ"+\Y`ƨbX巳ZoY,"k3R n0ŜvK6_mLSXa̴v"${}'İq`.2Gu&9#FH2%W:30*OLh{L)K+RA'W=׳7;+~m^ݙy6e6:VC2ćuS .޽ڸ@JfEC% cS /l:5ҋ!Sr^?sf#d`/O߁lVXwOҀoW!^+.&kFt IbIA..E>'`l- y뻏$q<=:2}"M)#s9&嬮n} C-'ĵdGl/aGjPK&tecnvBC}kLyʶ_=OϨtQ 21̹d|۸ry!3R1aƥPq\:0 93Kwnuڒ8=yA[0n ky8oVUՁbS <ܝsItT:R)︗X 0 {N9Ph̡[XmP\ -.;\jcRkaT5\K͉^{'ڈE#Cbh4?@>l$\:*/-k ѷ@_ 1mKztu5;J#QfaoQc9/{ߊ88gQ.VMqAgԍȋ9:u#F;ΓtƁy^&n:cuC8.;I93^W=5=\JN)U=KT (L 5EV[y S9J)1\wPlތۉ;@ĥcӿ889"#K{Nj-0w+Қ p(Ŵ{K%h7,Íp͝!4%B Ii_dSoԒwF&X5 ֯~AL/2~K1:Rls$+@o;!ϨK ]ve%-Pylб*ӼkշD%Wi4-[-Ӕ!s"9(:KxGYyxKLu,2YeӢHr68ωVK& IZHs:(uVpnc7*,fKFtW+,g\a-po8,ƫBChGKa&h.SrtwJHW1!$v8F+Y+W l* Z\)n>۩ $Ew1UM}50Sq+ҩ/0 !O~Ywp@aHRt9eF~La9J /DHB K4)gy,Ad1q-, 'vN6?)˫/,u;~IlNN'>|~:+)_pċ˼Flxr;,N6dd'K42tˌtHQBLQ%mZ#gFFe{ynf4"RWoķ]{NŽ,K# 06 C5!Vp Џܥ3I]/3I[]GZD濞-V^%,/$fw jP JG!Zѵt؉Xr`nΩ2<]ObWC7Si3]y)!aBelqXKew`+-AVhBCV"Sy"e^"v\8m-sԅ=7 R>i&iƒ!>kV$B6Zjf+-%6 Xi\ `7' ͶUs621o#I<ؾyf'j=e8D*a9 zwxpeK1uQs)(E"T!jb(-tGb7$ !G:a`=t;\us5R"̽{4q ˳#[A2" X-=2ٙ?dB;(-b11' w `1߅qWC|L|5 BȋcV%--s2aCf/-/&NB5䡀7;C22:'}~LmcvrɑD{tFX2j,lx.`m^`SɁɥZ.B&ҊFP1S^VJQxLo.);1Zr]͘/.cfN:}F6D2w??t J]VOK3)(%5Ҋ"% Q4F^0RIdT)S˾r0hHܥ4AŝnպCknOJD{b.(sѧչxVllzϺ,B.y>_qEeC_Uź9F:θL6DlMFut_4M[x^LSlR~?~ۯN'S/o^'IOr\]:چ'.d!!,"x㬱"7D{C5jG/ '<8@uan{:nʏ H}; ɳ/t:n NRp4B {NpPK6;8*q92HtӺ˶SD2Tr`,u|}feBE;l_FS %P},_!UW'Wzܒp׋Տl,/΍߻ CB.ܾ ͯZt%eEeN7WWΊ9&m':-9ᄣOB9*?-9'2əo=+ň"Ya/rŋfՋA AJ%a9 *PA3 Vi/KMN$L)܌I{h͢7ެjU[9(b9Fs Vkn8;vVvtn]>SOW шX! V >9qJp2$x3< 9kLJD0 ]Or(/ӹBU-aB௝ɑ s0s 0E=j\ {^< k,.)< 0k79qXZ k󴂏mo=*4?+< ^2ASP04&F+R P(BH2tJgqBpW9[+{*X~KȑEȹ*'GE(s.qn"^L:sZYIK^VTn@)@ϪV#>$6֞JOWYHqP:4h*C*c=U <,g_ `4ZhFQ9I КZDh_Hά@URɲh|aJ8IeIlj1r di΅I0l麴P+uEU!b)ׄ 6X1-d%?+f2̖WMD1_nUjO/xzd(AQ=y} \ZК¢曫P糋S\|xuB 0 ?3oN6Z,o,%EUӓWBMaFQX.W4Lև|J$Sи<\ |C.A#}f8 \A|> *t}$\KZζ;IKxu捻 Wy U>BV~M9-=I ô$׊ǢЦ"tc͠S$k!Bq6;u#K$EU/=H&\j^hv9ZX3h`1&֥&JE'ƈC-jYAi!,KH$x?iM:kQwu]~ukWciBxtյm5 gA!9<`FrrPm+ULG9RoDDiOcH3LIq$ðNjJ mةL9CoiN 9nڽb%za$yoDUBsS3 \ lo QjAh c=nS'{*^o|>L +d%زhXxpgZ.<W`HOe 1*5i5i 'o4A#ӟz}y^_pV g K~w_M'tl#c:TN2Y='Q Ⱦ;LtZn֊#Z,җX♄jFO%9dn&e6qͧm 6:!N:,28I<[~ЖAfE *%FreUtѐ3,g^9Y :o4X8|UUn ]S-!t`ӿy3yЧ7Ó+@rIb3rLw-o6hӕ&R΁-"_K#Ox{;mmuv.V^GqK_w$XBʕH՗*op5%z!:<صN1~i" IY:狛V .\-\ r{ou60Y=CUw XqO-hvlD:i;R=p*PL콨p -*XQCSt ? 
fscl5aHz(Ȑ 9n{Xb ݵfuOq+{y ڨ9+8ǝz#2BCt~ d"rJ@gZ"nicHfeKXYbɓ )bw56uf⯪UdWEY6ȴQx)IZᤙ)5-E(M|FtIlޟ=ܛ9M,Wn~_wӿFڎnL7ˢ1z߾@<41Kviy'FϺ^%ZOBóՋ y>iqfKM`"J0;_>@,2wRɻ"# 榢BV!DS l7NWw;}ו8;Z k =`lBeuM01ZkhM9*f6y0{Y C&|cȀI9b$9`ezx"+)pp*c?6E+Ta*&5v3*)C C4n} 3cWX#(nk`=Ҩ7A&دqil)l "&AikC>,Is~x&eOrna] 2fX1úL3v\8ұ19xo]3+" c |>;n@e츅l<;.gk.fͪ IOghRu&R},5k_)[ v,C@A (ΰڱZ]AWFDD1mCD%9]~؁sΜ:#ZCy 7ye;=(UKy60djƧf 7lOSD"a$2Ai[hFԐ/R$̎"yH/i|stg up59Z7wW.>釋q}"e \,.<0r3~c;X`E(SWq/+ {|) !MgB[&__1 y;<{+Ks<2l'qţ=Cl+Zj5!lPԭSi.$S\SP,>8 NYlGP˚whduP %7/6AMசG%6pkD;}7@R~B^!]-I zXDߢZH C EIӱTB^\6,mAI-ϕh$ iIlQKB.=ogK"c@Eڢ.`?Vf2V9=|NuDmREq%bi FWId]ѭeΔ镆 mu-dn)q v  %2̢~3?%N0c؜;'|ìxiޚE~RHWw!eZY&)F҇i; ކkɔKZHocws lTXJ_1e1Ȋjl@,#Վ1ԑV'yV]s!(֟"mx}CV,4.w׽;Yk1Nd`N`1 ^Z9$幝j8I'mFD;85d=lpܧjJli?tCdȸgz-9qm ./_o$?\O^JtE3Ix#+>Ƃ󭩦5S2sl6 [M?}p36c_h ]`/9m4F}KV`}5i3t}iG͐Ulh 6\жꬫL56zPDZ,d~g $׳xp+*eT/(IP}ű2,oѷ'pC- qBoa "Mol(b_0p˰sڶ_~`R֖x:6='A b_j DXEń$A= Wi) 1?޴{<-Q^A bJ#_Wxp8ʂGr~+ǃ%`xwi9 ׭AN=fBiEgH_FN,w1`9нD ÷wn':1ߏaý3smW,{no%Ǐ3[P9z4wc},ѨR WHG=`ܩ]eOD) IC) .-t%0tXZ__PƣG݆ }P)rzj.}A#C팴Mf7m/Å6jreOWc?֞[QdSd4gA2>r=>q;7 @<~wq8GJ^\9}=>vGSs?Z}bFsVFFIQJ՝;} @L?u_#'҄%^RͲy+R;P1ʂŢVw\z가[߼z:g 9C5Pa_[xٺݹc}XaE*OgE(drP?AƊ19j*)8t왊= 8%Gn$d&ANsD ~ȅo:CC\A8ll_O@2]sh`OwwR9IkFe% 5L2%!ԘS^ Q wwpW%`@F >h3dӂ#qzy Ƕx-K_b3gWا߻?s˜C3Sٷ1zwV oi$LeeYWM!+]4oǩ` 1]vrqk %uHk4 R dFR%h/iF+\ȓZevrNPs\7PPVrP6RY) ΄\+B@!r8û8x@i+H")ėҙ$ `=3L>ĿYY yk޶Z8+^!ˊ_޸eN?lod&PC 3{ۢf ujݠ 6Å&+YuI_htDfsv"&%i;psA-bӊq UM)BY㰪8r ӜvWŵhA'\_wLfvqʑr7SRv9 -( \)dh hYi B&$pFHH2#&o#]_@z^Fej@# D~w&5XjM | Bc ɾ)܆',]v:*=ޏf?Y,qMkbFűǔzCǝZ=zvI?\\q" K&k53+홪`ye˓yC N1EBC%j RTP=F͋&97ś`*h֠i']\6X1Ơ!Gx=V;0$-Q7qr\0p/h J tCɾ 4wn@ 4A}s\]O]]]W!A1<,tWO|W8^]kְ 8e @h'pk*$Br|bcq!ф{K=cdXiQ<A0X)(r덢f ~ri ^9r5c^K +&D -|/©"߫eI0)+q"4AZxK.1XSqZH HpBIsu!vbMS>3HʽԌ[{ϽV;DNb0-@c5iG4|Nwߥs7~z/W- O{x-+ \2[W Vi-;=5Фڱ;~ը拵_N&o[?#B^?{&va<?st{hCKVm~G k(AZex  ǚAD)B^چsyubCKzGl*43l@08Z[pŠuY)#i͉8ꂢ):{OaEn oJ@FJc%6:q !iy  CЂ\ 'LsiHh9.+MaS!6K@CK@#tnpf@?kKtr5[N7Cn-h}ĭڍp㴂MPM?ndqr&ZV|[N"5Lү6ZF[-y\WcQ]AK|zcQz* RZ] Gs2%IWf%Yg!d@wƀ3sAbVf/uRbg<MPknde`oݠPWG-׻<|:QBd* .߶ kBnj6TjY@nr(94aՄw }yg4m 2<Afcŷ[iYYl0hte2~:f|zb̳kOÿ-"ZٯlG$#BKKBIph)bS(C(a)~e;FYՄ!ݖiK((OK2qQ9MXrkvF-'hP#PLb)rEZd*~PL΁!ƙ!\qAmo{kOPb4/p7ZSKeUwۥIIFٓJZw5CT]®TЄ]557)NUp® *QÈf toݠ@)EhEs`gxp)<>dw#*rY\/ENw Ρ?˨0DT\F$ч`GSm$WjnSd%=4UTVک]erTX\? lv{5{3{;⇲q;_NU11~?~׫WY)?cS}<")ݲXĿFXIJ-*\iIBq)slk7ye?v DtbhNh Oj6$C$s/|+WW<;QiNeO m+ Sm=UVJ~|w 7!"h["J+vvk|_ggP~M78ݦ$ewvj?. JD/vp'^X6]CI%@ݭޓ맍9~O|qz禋 ZeE|(^0ɋ>a6/68GkXN"JB75| QJ~+a#J)~^iKQÏ ;'J╆OrڝrRITbO̔f+eWm:kXg#|#v&eN<`F1!!߸FɔbgwLDW•S 8Y0kG FO,BdOq9%YP_\C+BxQ)ʩlJCKMH-dTIβ e\$R9fZAܐ;+2vnϟFr9A>7 (ޚ3johAcȐ zl1'-mꊽS ȧgDZ;[7+~ c=:^Z];hV8G9D:d#݄=a}2 Ar6$nrnsq!}sTާa6) NbӊNQÍ:}*Spt|=ꝡN/X-WBNCzZ;& fh#¸d)-BL@Ӭȸiv:414iXDv !H3* Tm5ltJĭ X9C y 8S}ÕgTB^d$\QT 56wlqhqrdi%E5z}kE;tJ0yԸ =ŕ Ԣ4HlqdmRjcK#Ȓ?D [*`S(Hrz޴$h{P P?>2Z9ۗF E#Tk1.lj'g @J/jj@ vlÈׄQd'héF$2EW'Kx!tR54`I8.lkR   O~lA~oAL);*ra[rTgv=@QcYͺ*uPh[.6JצA(=h_2eNmmZ<t~M֒u\:1ɩo^R>xY|>[_8uT4yOW7f2NoSC˛/ TOya/=|m<dTjZ8/>ᔡӚd)sݴ_GmPc ԕI,&f `!iV;Ý+2K gŠ0d)sQ@K't&06APMkBkZ/}{3iuEJ CJsεCg1j"Gbʉɨh. LYE`T(NL F˜ť#%9P89VBY˂ιa (x\(,M4RCAը<4h=4 Bs :D K3uqXqgIh(0Ǡ0L3'\fFkʄ-8³B2;ܘf~XU%>]SnQ\v'|[vAh)=}}*d Q#Fy=^1TBhlD#藟__lX~91 w֜EW/~XT?Q#p-OaPrFkH\N8_nv׸S> ޺AP;P]XuKErAQzP[?#B^?{j=(* @GRZƢcݠFS%(*Jj+3e%s 'lǭ~$!2hX\64Y a1``љmp+mGl9 cBPڹ:BO'LJ_'/h؀ubn vAMANk Jl/0(/Z ZԲXG,'uN S]֢Z32ڋlw 6k NӄL!T0L%Jq k}!r9d%B@FnH/j1p.2a8ٜ68**zGZ_s 6 3@V3b9%~WzixxusC5hzyygt,F9J~o'c4n zdql8^x@㒉kF 4n-Srx! 
9c/KnqS2g4s/<_|`~#r y*+1[:.|?oB&s Siթ0p 9N4RH2x29-hK5Ow"X&SJglnB#^=t_H%~'-J8@iH3@"ܤ&%63cKHVLU֑4/&֋N\=K٥_Ѫb)1|^+̀x NZ dt.ĥ\v\r PV|szC "U39HIkmȲE`0] &;i'/ztkWm{-9I#(Y^VQERT$H-u[:GS̽BIU`EV8%,F;]X΄3a"Y""0T՝2>1;G~q# }~95'Zag0"p2SZ /4>3.SM/b^:-8ve&T-5&˞4YmZa)D*`-ێ9|ϋ2m?_Ix\SQd.A6r;]I(oe#ryl$Kx7 ]U옋+C ۇ[HS4{Gogڞ'fx]e~yT.?Õml;Cn˥^ F✨~p)̲wkû9<XV =[pC4b+xo=Y=Vt4)n~;3E8d68L[EAP~&`;2]ES_rb5K痻28N1*I;֥TÓ~%q\tLo\?X'3P{}uEa"r9A1 F0(֚j%^s]N4f+uCQb0Ol xIܭzAY\(Hn7Z 1eGkd %`dSI*N1-?LGadEīA:k4+(dwyaqX( y&N3J65Ur?.~eϔk*ϔǹ`j5@ k 7n0/&3&}w~)/bP2̴0VRH*/$h$HJQLQAU,d}ugVd^`e%I)m?)oITQcf rGa o+8uV| ؐt$ ]":hn+0Eͭ7v/jvVx?jhn7zN\6gm6^*/fK|fa˿^+)%;'i996:r7~z33>*'}mY yVv[?,ֱ5 Sw3SԆQ#j&Uls1XAe8Ŵca*b>?N7 jXu(8 .ʠ(,E :LCX+1.0 6=zqζW6%ݤC$Q-9'R]xԼ\8< (m,<]c\hP4<uh6E qR'X?OҠLVPι~l%h|IΫ>!JNg|/gIЌBZzB ໲`.z#{7g6kMѠG]r2YqXOK?.`~4z$ Ǭ5H5郠\c!#*@$Vh#P2vslkzpPVZ]ޗ%+x237Ӡ}5-9~#HV!vET:qm-2 &PTfu‹{& @(Ll9Q1eHcqެq Sara3F\8RgvgH9l;aA@ѱ /_@@ ǂ N3QuˇZR~H EARqXB/`̈́šr\avVxn`(`S#u6XvzaUE<%Ug/YP y:Pe< EUvˁ2[ܚٴQe;aA -㭔UT H♷JJe&P`mlTpݜ>..wSx#| 1zgF5pq9|/wULߒo_>\0;< GQdDao.`Lwe\xX7@LR0{>Ͷ2©$n!=].X!/8nÔ FJm&eǾ{&O˿ؙ_8p5fWt5WG_}Wgx+-mZI -=~<dzs6x!*eTVo b}KBvK^bTx!HK;#_)mP;gm9[CӇͧqyONYu_駥5ڲ=~Lr>C1i/a]ܲט 6ޝ$+is ǤwQ8. u% l,WjAjݳ]ϝ!]>!Ke;çӳ;v3"Qͩ0Ӂ/w7B[CS{n_m\ٽ6L)m .܋[ؑ QYMwĖHk0{!gl6h8xmq>vۗ"BH~:FcO-vA%n޵zʵ $唠a toi99B!kƬHDA\"*6ΧYl\ &T/f%*NSX@BAjWXR+o=AJw76^`PU‰V7KfObX]ŽU V`F5oHQuKhF3@`5rA$:a5ߔY|q\6jUS"yaIDrוa"GmwittwUS9 *zsVU3 SG4e>^|BW0TI1&%7 .Derf6` ÚZ0PƨhMvbrLN-{lI&g?v+ղT|kҗ:íUj-Z(RQռr}𡌅Vhrd}4K[tshgtsqbf8?}x 3*n1k<vH0#|ؠxs}%>L{&T؅ew.Y/wgSхxWtql:g=aA2ҕ3v"䍇hNLU[Qt4[.):툛S%;nY٭y!SU4b#iRm9bv %trQ'β[~Xbvk"Bxf׽_Oof7&{2 %trQ'sD3M:n<٭y!zpLHu( |ɤ8 !wɐQX 80a {;&[]y@"&9roj&LÎgvC.&y޵ޢOj6~spiXi:L8(%󵷀 :ekK\1qIiHZn~⯄Nv'!챟SBqJ%J',}^m\T %{jEYZ2]DX _58/Z"Dy ?kJ'DS4K תaTˎm@3e@3Bsyu"񛟠Q)n:rbI)Pp7wvXr8JkdGG+eY2ү" A_JZ!rD ¹z!|bIHdYpJ &H&U:Z#tòi_}M.WxuIHuYŎA\D ,dGr<~tOaY]=dɣ0]9ˮ@DAiϼ&=?yzs7?GWW@ÿZI>z3~eondƃ}^QCF]z<1a>L2–x"ޕ{d-?j| \ MJs *rЍ X[<7G?yOg.\ =esFVrcٸys1^xcV|nAMˆjZ@ We_V>YO]YB1D,A.K qL8˂"(ɐUsE ,ǡ45/$8׈p6~eL>[; ӈJ]3t+ue|ƿ+~}!@J6n++aG=TN}]A:U%ͤ74k!MLAZC w<ϐS$ڝHAxv>Zჴ4e*N:-MKmIL/)"V:iL{"pb+ޮ[M`.)GϨCu|J+ mt0 7Y1%usfv2cgb:ZGgN-dJ@͊)yM(RhK\)-wZ TZa%h )&GvD.S0lXrQQkV7_ڌQRq=̢ %,ɫ:[ƌ@`ס[QHGwF,gT2\i 6_@RҗyE!]WhVX*Ozh ^QqegUi}הjY*jb}{8Jf/;O+2?A gZU0DpI<Ţexqchwl֏8-?瘫 R'UFɪN E?Hg8?Qm#V]A[JhR_8,dt'C%bпW aӅ)PdTW  TcI!K]H1BP 2P?9}5r=|e*;XWZIdC_͋zAe?|5>J Z.,|^JK9Bm{<ՎYbw 'ז3&q.8x L'UnTeaRGF bQ# lHjJ8޵q#E/kˀb᰷ O$F̌WIi ~Ll#pd U,_QVjC4:p{hH%1M"nZt;uOyYI]JUQpԵ㞗TTQZzk\RoJ2|i ,ǀSZknJdVU(|8֬F[#e*ZNhʖ\iŅ+r30E9" '"'Y{w? 
Ax YTVvڶ>sۋG?<20 ?|{ykGw؆0YZ4?s߾>G'l\ma~M?3B^k~\-2` QO_>~'A9|G" js&Aݴ޲1[6ƨƱ$12vШb0BksPV&&fEոOPȂVTZJ^k:ɠ4iR~=]ބ/iD:ɽK6 NL(a*H_RRBUW3-JxJG kh*d I̘BA.2r&9@|%]Y:}e1vAit5!bCiG *ӒVPBJyJ>?HI;A4n )SH ?> pAY ZRh ˣsqkBpJdBԢ/8qaRdWx18Pv'/8 B+ƺ&'t=԰pр6+$<3^s_4n̴{D-)ISw{ry[ùdӚJwPZ6N7RH5b&^h(=ۉcv:K/9ӒVKDY@[+ce9c:nM%/4 %ٿx1ټ`:K_:t" bjAs^HL.A;tg_^~ԳA q2HIR_"юyBvΫ읏'd-R0Ui^4xS&Ͱ )p5DR)#})%i&6>)~,A6m,Տ /$7 gR6;vZ_5ZjoʥH7N2ٍ9k|3vk.Bj[Vd73wrqyZ m@a_A`ж`@5#(ܒ> a.ҙe}~pIyQ:p(Z(1]^*/'URI?pk[ŜU[-k˛W YCeͭu$Nw)"*>u[)JO>Q/_R8Yq᙮WJAw+k爰ZAiM;P퀶 Fp*iTPT J'8aInぇr)dI\GZ)%H9ͅ Dj3FЦ"s T ' kiòEw ʤNeAfTs]yv*d^v:`B V /n >˫9_θ*#A6^//J5mc0yeqBȶ25RxxbeUUB:E@ ׺ڡ 7+߮{`5+ ڹ AWQ.F k-KQeen7$e#| ZQ+٬7,\'4#P~#hHhJpEgеBT J鲂6Vǹrk'^-:!q IaK~֦U @)(p?Ђ3>O#WnSJtRɼ c.btPb܎,ӍC=/@8 fle?f]J{_ش^0opu/ٝ${ܙC  ⏿dLr5:`*{j֨N`h xxq gx6&# փmNK[$|H.(O6v/mgC:!x4k4%Å5<:[է].8Ua{yBʫaCO+3ũjkg6'FyEyo})ňNE`&+b?_g&Ƴh̢=.jkQgh@F;`/*>5yck cM޴!8,F.L4͏[bLQi#e t @uIo0fO-EjgGxn|N*TkePoz*fFa9sq7ݜ fNn?FݪMGpd[F4/ `l[~&b4GWI*> $YyTp}tȼ,bE$!ePL6$: '%aH IGaA1:{jevpF=HiϪE'ع΃4'LR|%_Pd]zR+ce9c:nM%/ =v`OII]ŗOi ֑  nm٠ZYJ:A ?)ZdRƦPTZ yJ.x\%z|JQ x%GҖD$; L LŌřcJ@g3EKb#Te%wpJT1W+S)0YITSjMs2^\NE3ͤ38:is;Ʊ-L6E_ ۸3ѾzvHɥth|aJ-UyZpu(uqO op{k˾'c#I; ~(<?^)X~IvsϲE9mt)*ǫqA2mQ޻LfBDD9yB] `ӝ#."b@gL1gq=sZphKL=.I5?F ^^WZmeQn'Dke U c+V0nԶA5Mp !ס#N?{=ceOsa2(SQn5udsN*1{wt LF4{ 䰐DloxgDvwvS4)׉DE)x=ݦu(o6x: hoQIuYWZW 1u!yEp/~a;5uG<_NLh#E&kz_)SN`iG6s|*Fľ7WV/ښ%X!&޵5m,n/CVR=Y8ab =Ëש)@ t\.CL׍y{Cøz@Qʀ"ceg̱Äm5͉OD'i"r;纽`4zDiB(cN"' rVY*+or?u\HՠsbB:'Uy'Սj,שܻm-; q~qNxUŬ F m_DghgKy?]:+ኪ>"{4=1"/Qbᕺ~x/ԅTo83||s 7MӮ1utS ~].HL5ݤp*9=g}i).B5\WwGQw'ߔ&MdnWfI,޶f/!-Bw`uʒh[?yU4Um}CduڴRTs=^6r@WD3ߧysג33iM:pXqh+edAj5F~ՙ'¥Yzd6}Lv[/[jW7f4 nS߸Fڞ߬箰xD5/Xې_ ,kU⋵xiEP E5q@Lzp`,7iC,u߹47ll hͷ6=MQ5p7FݨoTΆa"q͉ESL * 8Vqc$P Dn= o_ozw.>ɦԫ:ћ޺2ClnS_֞y: 3H{:i`&PHkj1XK`jW()M)01R%F^X>NTI>N~RS\3\Q( ?/ i~: v@t@/OnUXEWAjF,%{"V Sj0U0uM)Xȭ1pJX!)B'2 P+mo#V`XB"BgUz)h&ţV2էq2 "(ӱS)jFJX N٩..-ז(W$MU!QD6p@rI+!禺8@`,aJ*9#R*(8Q@B0R4[b@#t= rT~HK*La(Hb $e:I\t9`1$֎rJ:D݋C5*Y?% }{Ԁy3i c>t0o> |r4w ~Boό|pn0!vlzo_s`ߪUCRtJxJgơcY29mLQ̤9 b&2ݏhc{qBA<1c[2GkFǯ8z%wXWX;(I(l_&C vHVTVޫUXNՙw .w~ Vo}ٛ[]H(ڞåea=G;_"˙{g.?Eb:ss5E䍒ݸcq#f_K0ILh݉ kv 3A0\ e @Z>3B$IhF'a`1r Yĕ%n`ѝ"oJBO%1A:cNCΑ kFzX [AG!Go_|/.R_.l5Ԇ+k :+ 3@{$!j 0͌0 faɾAngضYzl 3lcэ38/qtJTt}(_AP7{ l`cϔɵƙ婕dޕg3m9sЗ8^eL IY䶈ɝwld&Z_I :;0WDV-wmF7wᬝ'w^&kC+~xALu5 t4pʋiW] UsNqow{g?C,GW4v<4wLL6-YU?ۇ, y!*C꩙Wp$Ou?{Ti=ܸt,z&Aўh揨g~U08ӹ΋rð(P*luuҩP j*)1 gzOR;lKjX{jܘ",Z!s6S-U9Qh+Z zCV!(j5m&͓Tx].IA8!}znsx ܢ*{$WmJlľKVFJx)rfx@RTZ_J Q)80RE( 12 Ōc"F#FW1TsU>M> u֧S͕JJQ-iL* 2`q&".5g#QA($qe9ъHWZKd|"i>fiƐ&;WSr ~aUWrr> *i>m39AIނχ'-%:YK6Ux1o5ЮNu|Xצ@9;‡-v`i7Z^hV#pVA,*y|9P0]Z+/>3ųSo8g%&DJ)ŧP6~1ĦUHex-y% f/1M_ٮBU $NN:QZ96~3V!RUjfhs& C='T빦KR^d6 W]%L?y߇('΂ۨ1p1 n[xU+݂޸G:#p3$p6 "gd^$26k?+MW߯C:+7}=ݲrU& EL1ŁvS8U6EV"Su2ڭŠRƴ[nuH7.˔&ZZ ,Jw206G~XN/!j&&TQ?d+iBDYQT2J&(Jxs 8s!)t `52$Bv3Ԃ*QN T;F&߃ qso8[] +E%>VN!o 'cFF3`Jn9:rMB8Z09=]>e~c,Ix͛A_(K~Ryek +zN~z E: "]l\HT_>"<Ռcfl[C4-tQs}Jo-҃gu*68jpEdrHRՓ@P;&47E;>q-xɕ"X7W Lᾢ n`bc6lٺOs/1)ڙ?PVc:rsv #5c^:PﯹAyinv5b$8ғ1(N#|.3]UnAEG8Rw0<=7QCgV뽂Sg{eW> (/<'9疥@oM̶֏e+yӗi<~S@IcEAT9jSGM4.eCY۵WX(j"T|1+tю,y( ) 9fKßPY,)I(c4~|)AZeK6P/V^U5kG17Dj%H=! BRKbj&ِJ@֤"j>] \_'5؊A2]U]ʶUB^'wV w!>_,ڲ?'xV y]Z5&aϖ~ wm|,(v8Oj #^K;-&?ook.lS~GWi᧜4=Oցm-ө+B8Tt52]j! y= cAPz8VA+̕` V\KOsIgХBc 5d1M?|7c _́} sӸ30PЯ N O#:8. 
xps ac9wY/-LZA ԖkF dm6+U},{Qxhޫ޼@  C˼/*hY|2QBH3+?)XđvMDaԻn_؋]+fdz7f|9/K'Uҽ?bC;*2KX#g(N:RM6v$e  Ğ0\ /%W 8P9XN'dR/L~k&s)rUZ/*2R]6^Ctj)l025$gtA)λLv>l^(k-q!d *qe7ZXl®HyE;Ha^L~ LKY'Ia3isL G1$<؉ˎ ݣkø=?ۻ4^a.nzT.}e f$Pl_NeG=!y|3R(WZ ?л1`>gmA#thC?ʌPt+ڌ8/5`kg坘_oDٌFٷA[uCiL0G*,ZN89*lI^'Tu'SObpבHuLm98{+.o|{CERH1 V mzLOM,˱hf)X&x r4Vy#&7bәHǒnLL1$_L0TLi3(RJ:9&jBCcB},IC%HRܪћ2G*-Aav(֧3Da[dh-gZksGٌSY/S)1RNGU񴛳oZ%ӞN'W>0)䃳f!ET Kq?;}QƨX-2UwwUe?l1yӘHGil]c䒴y ! !͑n*O:e a5Xwu]$ȥǦAO>!*Dsܻ\ џz0w<@뇩Ϟq݌վiݢ>_Yfw X fxXKesM=,y!,~ZW]9`|\> (O4sIqgIڬ66ˬư(-LWYƷfB5o'u}91Uzdj&ZЭ[? ͉BzqsdT,a11xIi RHZR͉fw7?K#B>d`?n)gjҊyMLmvSbup.8GR`dQrD]qI H+-T%Vmuu4_Mϒ=:޶wfz Vhy+חjC±v8zl.8ӹь!y%NSxub¢輈L/Dᛋ*U"`-8? s `^e-cm ~Z_w33)~Q3 ]ś~R ֠蚕mƺ\ϒ]эmw\(I~ZLEx`5V{9Ӟ\cUyjHCN\Ek/Y7J{nu1HQϨbvNK]ڶn_4պ5!';V=bX?*o0\#0A8൴5RYԢK| FioM" a%̦@m6ԀF;& Rh`{3kD=0E { !yeГ>F2׳pDЛ8B\p1VYTU4'R0VLsV\9uhcRbw5X$! vqɻLa=X1CJNiSW~Ȼ yQ.;Oڪ_5[2̎m6l;ߣFt`xP[ƭC wmIS{ۆm]ZHĭaR%sor%s:#s ӂ1osoH 5kB x\%'RJJzR# P,+ETz\3σ$DrP?!CpAje=`l K cאZPJў)oV1Cy 1n8:m):kW}bH{g?}f܏oL3M~mDJK+d׸To8łaSĸLPSg>c),/+{ulW=cv _9YzIF~WW ")T28cfd0;]I??iuQ˚<3­Sa$1k0N1߇<71ˍX[ Ν#|,8B\DzEQ>7/^_y!_Q6k?zrOPATƇPݶ؝_uTd`:?|W͎5??YLcps;Q)&9^g]_+Z?%HrGkg!isrH<9c"˕xvG*QЬἺ(BKE@FU:#krD[$Dzpm\s&!]_AgZʹ^SP!jΉJEQjq,PY[ xWTC%ł2De},)(j!I#$8UJ( as6X U `sR1ň^z>_Wڱ* 5i匵/< -$Fn~D !%X,/(C:.04sg8%6qb`&2FՌՃLSVewǍVeWM<,`ӱ|S"}P\gh!6֞1@P)D\qSEX?uF'f3Lu`&'][s\q+,%3иJOJI\v% WQv7[ 389fZKܣsh44^Y%ɯ EZa4!3{K^!5r&yl[쓿L7QGL6Ra`B9=5FRAr&F0.\]1XK Io"bjI/IFP{ڂ#yNu\Q`&G٠B:'L%hm^kB@E/C.'{0vWIlEeKIB~bi.-kP ɞd ZaҲv 䡗VLN5GxD`wi}Fpti@8]w>2kp3]Dve̲s.= r=SWun3ğ%VChֶ`_'Xzϋ\ymJ 968]1 @q1ӓzXA12Ӭ}Ƒ:Jsgrjf3lN>xG||br?=1O$]Go>>JMj^, 5M( 6xqIoCѪC m` tV z83kmpz9s]QJ̏k&8ХlWEbb7y?<ݒ|\ zIudɴջU bV{`)*gc(geje;OWRn& ̡WJI UʫcC D`'rTWR +{D 0ip[AF'K=;>n5ԳRUnE#]9b6Oo#m~åi11JaTͻ(qn'NLG۰UV_9o<*}Hr&Fyd%A<*JÒmzxytpshf8]&aFfNUSi$Ė%+BnԧN﨑p #/{31 WaF-a(^kA& ?B.6g/V Ui `G.^p}t/Q2A=l/4 1 h{L Y뮵oJ@$~Kn/iލ8; IB48)h)G yH7ُ`6?VVa3W;.~xn683xq7IngpΓҍQ!*- b&9ń)Kh*%+IE^p  ǔy# h1` K 0;%1}Hedm[J*ȭ[OS7E`29G9/%`R]Zn PƢd<$ozHMRtY5O@*KIłhLuW 62LفvxjJjnx ( tj&14(I!0;@p ioX(56\IMj2:K% !-\;6"jĆiqawY:b ` لh}RF9#1pwބBhLIvYN{R,9ɸ[|?ausw֏wZ*%Rm!y|Wg2\e{L2~Yoդo\Yi}Z4Z$/.Z+VEɫc/:P)rݚ14Ek +My5Y.yWןJۿ}#4(D%/Aɴ(Du#c9 byZrdt3Y 2{$+ VRTgJvȾ/eAJ 9ئ nDQ׮;Q sn׫4mQplBIp-Z< a}9+Q1TLgYg|ݸ4p`ӗg(+P_ڌ }$9o#LٴZsi~@c=QShWgҶ-WmU\Jӻ/W=w}-w~zrV1DJF%.4eG7ѫfYۡt Ͳ'#r;M˩cW 2޿\#$ fs%/d_@]*3Kn1tЂl ՍK-/{gpإ rCp}oOI!~ٶ8|'u]_5PKNԫkd&ݱ|rn r/x+8}Wiҫl *ҭ5H9=V=vSǰҏ\tV.,]V˝֥SW͠jBKx{7/_O@l|r64[F"$ "i&Q-|V+ʭ1& o-`EӤu$( l}g|MmwZ%I_ӳ]-,MeswTwyET)0RdFfQ 5 s>A@H%T' d?A%%zA >QYlh0OX hjN|8 {|bWv +zX㧟 {qA^[EI~("%.U yzrU>,Pj%ʙ(Mӛ%M]k2`ѓ:ΣYr .h b2 @?]}G(Z7u ?z5yٮ-Xl!)EIhlBJ%PSϒmG/RZEO0OhiVSM_^t1>CE V7Щ@Y$ƣZq!@EXPшԋpkp$Q<]ޗONTEƷO3SDs VܺHVDŽ&(6XtK7e_: &\:K|V)]U3Bt^ `6XSdU AUunя@ |_ @O m=*!0אʁm{pw1++ٔjuInI R9Q߁{"H"Zi0''j|I};*Tt: W%;rw.N.4BwXuZ+Zj>n'кŠDu~ƺrE-֭B6|*Y$q9EDLjaOX \?n[a4:vmnn~/k;X߭CX]%E6\ax51_}Wcib]/N1KmbEjl@Ѐd! KwizwC_ߒ Vݒ}?b4{#n;Vþ_,Te\"h'W 7wk-2*<7׷7׿G/r>Ue@]}~۷Wf)Jmy΃A sWr㻿&:'D@:^%yN+' 2 ҿrHeC\m\67e{K{"7pޣ΢ !Cn1mRN^+;)6x߰eH'`ؤmI$IۥczOuQǥ6IpuҦ:3ms䤁'mԑ/)%)ng8?{ǭ=^x1vYd˒"8[IVkn"=#KClkĩdW6Mf\eU0wJp-<P4FbVݶ9Rd$iȅo]0EFۙɎwHE(ibԐ)CX(kb9mf|wR$#sO[F! &y)7)꼓QؤakJC z.D*Ajb-maT,(HI]ojE<\HiZ:b]䶡յk޺7ϗͤ7s. R:''5Z!Gj|ž77/nR_^ƀev֞Yr BQD<6+(( |\ώZKd@,!-#id_uֹo_ȍB 3/` 8&tӡzy_b5UPr,|@5ӳ{d[Q)ݓG+q{D%!p;k4J1pCC`@d@_^DG08g!? 
U&@d p좧BZZ$>jJ` j4Xf#ܘpUé6QuGZ^+ޏ(iO:?5=u{R}ٞ4p)>Foz7#V9S%ytFV_OưnU6ջ:֌sûch-AVnw!y|&,䅛Me֔.8p~ܼ}9+loY{CB G=re 6vD*H2-=7V;˼bI$.l[dR*_.VcgVc Y0أVcxqWozعmB&b@']H&FNA*X`kv!liv^vyyZO9\ӪlQKw蒔qPb Rֆd1 +w{8lwSwi#wJ6w5:?OT_$^bk}q<*Wc]HA>Mc%KuPC7On>]]ެ~~jje?Ymˇ~͇yR5Kh ]wͽ%dm7d{˓o9v=+w)%+?"ф7cdIIS%ѡ('fyλ.64]e 8G=B8Q>'E;d4QT[wfqG_)/gLF} \٣>n?=)o'cQe8'T9c>-N2Hك.}ИX!Aseo`7RZ߀ ǭ"(=M'@SQ]5PAQb^Ti~LB&0k8Ֆ%7Rzt y+=pt|9P.zM¦RG &$Sjec:Q}N{S{L6p)Gqnz7Ջ*e+(w;)oQ}#Ln6p])iSv=0q&]=-P\/D(5crMpO}~DJL@ЦwbxO竐?hp;Ċ` ʙf+ѲVƣ mXA>vZD[5g*U#357BXg~sVżdޤZ6ҥZO Nz6L@ j&?@w5BX]h(`)GC="ʈR2mъ#-Z6Z=F+jSLGRNwF]kadK_'ds.>X;3 [TVv4RT&6B\-@K[fTʘ,ޤjCsO*x0}qkɂ1{8ԇ])Ƶ5:F2"֙fw|x%^Q<6y%u!5q!>sX -8W+Սx_ K:0P_Bã̽@/ayRsX]zYҭ%ԡ陵:.C~Q=m<')Fд=jn,_ܸ9+%(b^oT2o1vZdZ<(Ya iљ>x4]; 7o$H"&սg0 fFhլmh:aX`D+(XyA?'$=رX$FLp>pYisZd-9Q1 X鸫P8p)Z [/4Fd s:jiY21(m8.FA-?AZ'Lh9 ek99 }']E$[ (`ҮPuw  9E!Al ;JtbB,yVNpRXRR~3HN 8D67f23dH 6|oi@"X^H]#7͆Ai: 1v|@uoAyo ObÙ?ϝ(ZalxE;F^䈽ePz><lx |6bgv2FB.ѨBt1{>-N]?+g Meх‘p88A~8z_~jɐd rpjztjlГ9P&MƦP6cn 6V9S%[_ưn56}ƹݴ< ZNwxC"l/gnm y&fS2Ӧdr4>߯SmxW@Y`)#~Y${2V&3J`2)\U7sJbĦbR3qe!WCNj k\1*(-r&cL̈ aC604x4:d>푘EY+T$SF %[/ SvOf?9Q$ wzKIN0"99+xN"~^^N r݉Qhg!^. WOvmҦM]"9Tv9ѣlܑyˍ%; ص̷}\*jlXzG B= rv|"J'PS.*4#rv$NiJiǘ mskiQƊN5kyOuJ{8Pt`b9~;o T4 6y9+a̼IGRnCA/]~sVy5:L$uO"M` ?i.3&!Wz 4ל~b)Z4ҍ6ooP|=?CyRlj蠓Mih)G7FkN' sV-jqI[0I;m?194g]#2OohXcPh= Qm}1?yFWPmQLeuq3jWzwmYb`t lN ^'ȗXLSq[օeG&3[*X'ׄ<ᐞTA`|+#$gpwE'5Q9P _!f˃J.b-A p,(=VZ3I{#9a`$L?hI}WEb;Fzp TF#kSFvL $9A8 1'>#v2!MNzM3ԎYH2L?4HJ8,&r!zc&i*sLr-'(U<0FѠ> ܐ:s h|1.äמW+|$J!5Ѧɚ9K%TkaG+*]*{r(RQ$* ,Mڔp;d,Zȗ G!(#Rz瘎&r&DFČ>>.{| M̌B36"` +A2X^UZgiAݝj5/V\ ̼%,w#i)2dLdjͼDރjr `UFGvAZz$}&s@Mү=Vg##J',-|!&fB;(hD4>bSl< @}4D;2) YZUƕo.h#4VJ#)W*Y՜ rcYrх>EfHr躁0YS=-ihdº zQ|OK"F$OV/`#-dJKF8_f__MMrv!,lY8SCe)yh,dh xki)lO[OkTHT2/ Ң>GkgTϏvdJ}%kzrh}7D8oX4Y\/fW(/{hBHSce2Jrh]E 0}[f4^\4)\61Lw_]6j*./{Usk+ϒXmX)I|wQ¸K[gTSr4۸k)n[Es_f;*׌Ѫh^RE"Lh"NnQ_eҶ ~-yURCD>/|}n zrEhj^}wP]}bx㺨^pm9}(iMP'ANVmޱD̞Uj{uúi,z 0Q{ڌ& {9AhY)dIKi%'&, Zh:fX*w={*=:>F崟cnMΕXsV<9^Jm^ߦL@+_>|naXsps~yJۇ;7)#[u[~)eV$(UKr~Y&O4j:xu9bLې?6x).m٨"/4 > n 4FJa)]>ͅ'_]yWIyNfShvv:\c.enoEϋz>obp{qݔF,Qf1,@:ud- Aɦ Ly> O~*|AIޟ,:.~|ELwsZ&w{|g-ŇKaOj,sw1h Sٷ,\'Ț(KC+BD2&X#`]P;M>5[}Kl9/WriaX9Qώ.7O1i>/8Ki_~NTFNg#|[*ӯuqYn]3L`xO+N~׷N0p?[€L1fO.WLKCSMmaYVey6JIF$໓suEcV5jScPJnZ}_=xW #rNU*<>[bf뻙޳%J|1 s$T$YƾzZ񥷈uKT vb3r>{FVǒ^KefN2!0&&zP' zt,ʨC|:}V Sm® @54P5yNhc%燖rRHcTv6s%%ϴL &yKh*r dFނ4< ee%G#&KֈȣR%fNhKQK_F!2Am);dıKR Գ'9|o_ }D.ЎWzS)_kF 5b}ӧUnbE۪U}Mumrm7[!TYY_S{#b+[]}a\M e;!J -]w %EyTa^w|Nh±qe{ם+8ת4F-1)Z)K;pq>h~5gɹjy<}hwu͌"ԠRO}H(q@ @"ghO2|LMLfd;2aC7cF礔TA'oǜ,$4cayCʩ,ϟ&ֈm-J47KL_;P-jޔ;k؋jQ`%'}hcǣF^D>Ž)X)ЫOY(Q'I?UjюkQbe֐݆$oF+xA*S\`dr h4 Iۨl^kFXwa^緟{2<2Q7`SyC?uG&r63_3̛b 1)GU&||0joSN8(T@;­ Imff!TU ӀPM ̜73_3y%QK Dz&ϞQ82Q;SBK7Q}j1[Ke-*1e iUEȖ([Mʺ==I *% hOl9TMNz[MS_n]n{\R!jM}xr![ygKnO/ETõRs2ʖhiRhm!"?$6 3M7|gXn"f*`uKN(:aUJ[KNmMG J SΤmu}%1\`fawT1nw5W]g7wQjxgK koNnu x[w6c&ICNEyP1 in"ׯn)5 /!gq:dXz0&1Я"-^^%cMJűJw㮦Ynn/>*Q"ATBH系5szKy;\kKLؗ+0L k2@(iMp8͚w<.} !+0&!:޾¹ω#]]qCR*Z: -c!z4/ -nmܲ+G;l&(q&(Z. 
qjfGfHZ]LS,B"sޮLOkt}@NBp!M: ǜqGD@zv*]YBjm_z{1aFi r9wN[[Ўi4KHX)"%[ `՘=H{U w~@ҭ^2 p٫= `yueOIΆY ̌HVLhe]94x5VC:3AQiKTvV-MT-S ޫEKW<+[xEY (Ce ɧ{IRuy†0}d/9 an\۽q9@VrwvbӋ @unPõwd t x{۵?hǾ ];ܰrް*5!ܰnIIlM6YwiL H=]ͽ6S=)W{Ї۪~4b+ ̪ue:(O&ۻ_nn[:N^j/DRL2qL/uO7V.-K >A|=IΪ X1c`c3ٷ}1^ǐ4e F C2ð}8.ȻNvaNn \ |92Kogf&82S& ]T|ӢM^M]Q0Z umoFޏ~Tqj,7nÄ$ ^ݑ[Z8٬YMr(ѮFfsPYXwb=il`D}K[=Zhc  @+43Jǂ\B6i;+>ATKzV)Je/N#:MdgЭ"GlⲈ0 7s>@N`ڇ"h X.%vesL d/WJꁵM1fUXG>[R@}ua{#XLy#h0Z%c_}Yp?c/AB ; vڝYm7(s;)Eo㸶HcrMlvbNemP;Ƽ.n(>LhFv˖SF1N.l\gBh˰1/N R?#-YeƷi;VPn(I'.<&=Aj2HZںe7'Cmʆ\g*e՚?ktS\%VNV8(O*FKٿ|PklMPw }Ig3AjֆF2=f C^sc !>C=O ^`!jR!omMu Jꔀэ KA֤m,ƶS^p%54Aj=Rmr #Kt .I #f;aZ1ȚQ69FVsPj&ºBatyxT5Ok~&R9̻ ]5{5[֠?@Mr GԒ7\=y7rCyWo0{Mw&U Kpc k8NQ7<{oT#[>e+ͪR*9rZFKz&ߎU Rc)e׫ԓ(֑ÅU\fT kx5:Nكi(WZ[`v:azB]!bWq믟cAE b\֪˱/Dl56W brV߹˾=!/ScDВQ"XHQ=΂u ͔ yO9W%sp0mU$Ri WYgѱtl`Sgŭd!J8{z5EcyA:;K:u=5pccHMg& yc5}ty7 2G[YdR<`4* }ylD̕yoU8x=S0ڊ*ΚvcwBrB fp.uS&_5erq g@,=Jr,}ճ g)Bٍ+G0]u]<ۘEbCG?p”CѹiӠd%5̮E??>C|xoه߿?G~LFI༈V:Zs.&Wm,r}Y(E<Ω疛Jxm'ó$XXa?f KbZ^Wo!Z¬xNRNG+*EwA;d^;0bOOgxEbObDAPZ8Ygʉ6/q '`B\_t0m9U{K]FWuVT^[V.˥x& RWh^=W @- YGg D4?x o#XxxV=+z@|8ȅ%+'a?\pfZB0=*ߟX'PJ6xoqy#[˱}K<עK-X~(y5@J#ҙjCbH.7~D;T:w|jVޣ3y" bԷzEd"̊n V"7pNx[[# W( (հ2L'Svq2*L2'8`69P!/3XOJc yS:Z]N٘|l 1d(z:z4Ǩ5.$ IoĨ0Gԁ(Fh);1CcRAg\ ckZQH&cS$P(-[\,Z,ZfX͵<&wh90lmk )m.wVȖ-{$5g_NZfI^jJK@ېب[aF5,c6#/SL= ]WmnQR~wmO-jiQs26KiQ'1fE X^nSk|^&FgVn93uh 37B\U}zQs8du*- (5gQiQ-TO[YL9l5yot(EK&ny {DDzxyG+Q٣ܖL`݉6:p2!9F,QGg6u!FVjXt󮌬: 0`t%wsх-~Q( V7'$G ms?b3K>5oE3+,Uj,´#kAv3xFMDHd#mvn@άf7ѳfݭF,4NQU^㠎2}[+BE}QAr)یUfK!rKM5^-! E#tmسK"'-mŁ5FG> `& psYnZ}+U&Jw!|dkNmƉ:i1m+Zvp^]Yi-vW?aw`we$)|Nt,BnΈ!/7Č*#wVK3fs`w9QڽRPR[?3PV# 9yv̡)7iI*O?SwQBH*k eFA,A RӎGJ}tb@@> $Rqjֺ!#7;nL *WWI(I<Ńi} 46x>Dmrsb LcbIKMRU5pB)O=D Ԫt{4g iH`ԙIrݠ ;ia360RvЪ?MKJ=/A[j5zBS91XL9==+_p)=$,:DSz.KEIm =CnΞPU8)a/nX V7iiQ-@kr`mgשsU_3yt'ooUjNAT?G-S;/ʱ(h䬀CkPxͬжIS5Q fr}[Ĝ<b%s +9'dX%TB+Rg0s0A#W׏ڂx13)ruedY\1݋@ނйsRe3Qc6)y?\HMn;Yf!O&9R>4/jMX*}UϕR&;2yK0eA9X9ADm56|Efb5&Sџ["4^ew6Bå3QjZ"TgzI$6/ 'l \a4]Fm 3Vҋ'R;LjU]ŔWNs4j.DǏMˤ'=NIa(>Y[TFLj \&S=V.|?vEN92`Ty_KOLJϟ߲d`I0OOs.hCܭM~MyOxgWv#6Q>84/k.0oF(yjf&2g"m+%dߥ5}AմH $蜆~\ ঠD3yU蚗.yfE:Lp{kekcE8_\}_:>k7?^%eA~;i辛 կ)`_U"̋2θUq]a{ lX&x837^u+n/^;ŚdEk/Y~;i^՘K6Exz;x+e +e.1ϽGܕVK5=i[ݽgʖR.U o?XKuGXtl@K̈́ u/H]Դy!\JzxOZmXR :T=~5O{^&}yPp*uІ& &;ʛ 3πp]cAqڭVܞw=nVRwr|(l4K/J2Nff=dK/#ےMZLOt/ۙsƤZ%q;t Jx7,]pZ04I)R!׫ceTt̀UoD ]HvʡoKrUo5]֚1B\2mAΉW5ךo`Ru]yn?N34wy_Kq.ŹK/**"eY4%5Y.%2DyL1f1列ݵsE fX!IL$ӄء++hf7ꝓ孩8/:lU#8w+{^Ň;º_օ%vV{J]TM@sM4sE.g4T iTQ* JbtX0X&D-@|6E c[Pۊۆ(Fdx1jRI rk [R&\Ȍ j;xb\12?xP@@,oV6=T\*`'QVUW#&x w^ZWMՊX32?/l$/]jf{<{565 Pu{u{DṴV?CNV8դDEu2䥨a.ftd);JNT(>7&d5TFʼn(Cڕ? =L=K]L(t89)eKdZTQ*@'rv5X,j-wj8[/Cxugޚo)^4,&Jv7O/'OR?][שq:&65QTᔔ|l}cF7Z7F_j"V7EҎM }@I7ؽ Br-|[ :7d?.4z Fm F}!+O'oU=2+lh #{FJ]]M?Ņ dSZ=&Z@seK6G%\vfe,쾧E2kT8`?ɋЖwTR2tˋ̗΍zNG5^+خ)&ldQujFڎӦN&^tmDc2wJClưsX#cFxc@ٕ _iz1F \؍HRƠb$@߸C]|!~Q :Kг+U3$w~5w)%wW:Ҏ W=;Y'lN׶cPX9'e$|5r X&Sm$q=pz턥=X YI"lO] ?ycX]֮qU^cXы>`(Td:DgjvGh40:LĦaM_&;1gR I3$OP$OHqW1]I8Qw5AwB'BF=*'*;*J ]Vz8*6l.r {g7K Eմܺ."Edmds )%S0YoK6#W (|ua ?6'flӈI$J)xd]Cif"?5IB&nZ-t'E?&*@۹$@`ORe5R7%Zl ç 6UDin̬hr엘'OGA?,d·ؖw$׏]^.?'r^{٧g9^ڟȀZKǖU=;7=bSѪ#N~Kf aT ZcwVHԎuZTs(2c(TƊP9e/C5tӉ+TTJY5q1E3A1FM0%@kѮ7` kU' φ(E[J 6@V:yQ¿Lbʅyr{GOn&c]amU*og鴯X@;`}JQj]" "(TLd@˂ *Q"C ,6BHsp@zb+gf 4>l3TCs ;C X+V . 
^(57/K?C`K\S[ Bs^ XМmRR +4WBs JؾD'?yL1.KI,&܊^/)WC3f*@in3+x*ȺK|\]7c~|:i}Ei.Xx,`X< @U{'Bv2JykoB…x"j$t"*5Y' TH/ANVĺZfo_ޥ%S*sF8keNM8ǨJ3_a[羔Ge- Pi䴰.1 )ɐQӖwl>Omo>o<2x"פ^ŪWOʏnv631+̮x ?w.:v2]j/kȌ}>%$+6ND*)"9шA@digljH\+I_}1yZ(6 }w)1@toGJ[6jbF>?6#”ݜzΰmg annbh46fHYt!Da rIBNrQmAR_RAnŬ6H$G,SkM,,vu&J%& 1ض@ۿ`P] eRQS ծ âMʵ yjWvUm{h_I# kSg{:byr(%]YTrFETL8#ԮRPqۂX|تfH)N_>m_ll\ YX.]R%7I/p?-/kWMm|sry]m7?mޤJ{~zWLwoh6N].o޾EXn}ǛI1we8ٽﺷqXu5ONǫfӇr=."mz >UsOnzf@w5/HmcSJh1Sɐ2 .Jb`O^|gX3fjv \ *r]|jt~y>h3>d @k[`_}]Wvwol=BP&!v%K4O$ mUz9Pc~U:?7 |1>UT+daMDeFrjJZJ[K9EML&)u R`0h4sk^kjgU_ @ݹ78㸑#inK?B&cW4R:Qb\!v<ήsP\(7$($z DFMPZsx!yi DW |q%/ -0<ȱ6%fL AK@"ʼn%w7}- ܆ &E4Æ2&Kb'Ib QL\m(%@z$Fb&-µ#k?XY>޹za[-m)¿ 8F|qɇ(/椪uZo*O4Sm)ƶh;\Nk|kC_!9 2^u͌B%.HN\-̶"/vsߋ'h_E_XK1W<.&`R10Sr],N Zs&¨ )#%ڔg"ĐSkD"ȫ9,y3#;0W'UD DD^QJ(:xDv52D3eFDF?;*x(ν{Ɛ sv^痵5K4iwS@{:3lSR4n%Ӗ0Zo>YDϨvֺvi&g:Z.@-L'T&|6\)Zs"I (UІST)hzKry)BDmZuY/j]ⶲ^/>VkBRʤ?xI]]=8fQ&5Qxjoo2mI^f Qnl;;/ Jc( uP*xE;UD~D"wGP-R\{W +D`X9WaPʑ) X *3%]:'/;OmOR;؞:^P}VOXPt6Q#.R3®` )jq9䔸{Lj~yuq%^{VdR)WRil (OgX֩jHh{Y,yn`\0Ycʃ2?GDgc]T܅CA-P{e+ -TiSi*/L΀lۉ$9(FȍRcu~$w+S"ޭaJycޭ|wb-5r (]8_y7Ձ)F*a,n=[#Ƈžc !z217Mrg5.۵~pඍ EU0uw.[ytvs#j#@+I{]HJ}"^T7~ʸ[>Hr5_W\@P_IW;+~4Y!JirsStQv^]8ŨRfMŝpDn򤝮Xp:3Dw0 +`wq,/N[4 *H1oaTZGXԪ%KkG`2[j*V>d$LzTT(βҍgdɝU-xngyHQw:F$ J%>}z/_>/C mDUL@-)Obf&PoNd{, Vٻm!݇_?;v%=4R@mHvKnPX(mëZJ.*!)X5$S9tHE؎MML4w` y+=%u/"s;L|M4T3p<,̆܂niś]O_c*Wg[;wôLKWɆɬ :=Y=s7^FS`/g:v6qKܥlYR>k,;cX74RF^8I ]lU0le72ڕ/uj ;_פujJy}z 8i%FCN# $(G66$ԛ+)iݡ)≩lco+T37*9TnxaxF7WozaU]Jh} ̖WŽgk ̔g ÿo4}al܋󰅛1mW# b¤}blC/s @,0O׿bLӧc߱c$pß ld:S8L2q}b|>>B"ĭ YM]cGݱT _:Q\N|8gn{Xb0aA)NG;Ӈٸs.~*nNY}""x6 yWhPfl}$&t׮[ aK6ڧ냕"Re=-i#ٯBj{-g#he]!cb )I}p2@$iO Hjf$3o`'%B5гGp0u 2,Xo6GbY>n0CfOYi+XM_]ef%֟]-&#^qkf| qc24BPr$PsSAc(B́a\*b̍FWvRB(1t:PBQ8QX"9BňI.89 0Ā*A (jUJǸ GfNQ,NQ%XJ焂K% ABFC. "K" D)֯Q,ᅈBc))@3zQb PP!#{+M28]Mo[e'bk<4SBA ѹ`6XY/==Eɥ(ۼmwxqAd *`Lр( b`;rE0^}_T].i0ndHYX9'c{X`COrJD\TAC;=6Er~ƹ겕~zŀ3}u/*2~0DWHs'۷+)'N\5^9򚷃`&@J[}BPRH- & qDPpM9'Dfpp-\v;ǣp``:]Xa!-9a8 w#қLWs Yݢ; X7lr*JAl m)Xg"럄A5_*dPyd2wiAK "&԰wj' SfqDJcaG`-p XHFU8%Q_VH3BI93c_^(]:B&9QX-'랜* Rvu-'c+ \-EsH#=: '[4#qf@u&gi  HfqPrUTx83R i7K,֊ە%(D#SnLˆ BsE do]!ǖtMe89|2q Hfl+nF։=Nia}63>:uK:!z,~0PFI2iXdzee1$F6QX %ѝ1m 3z_gVe~0peeْzi'2X~[gHd4LaOLJ45w |vh>.{8ܘ`v>3Vxǜi؂ _m6 0@ % Ub*UDR<у;15%+׶uWvEvJq@fJBpH#{v}zmJ#Ɂy{%dʹD'x@q";]<Α ;\7}:Wu3CblU5\tᏊ[J-mݤ7þrz/t%λЖ#$eḫjJ]B*+2Xhk2drXFVYQo>Zh!248* G!LCJC%ň)P"$zSP~JjAQ~I" D*J+Rv*;(J|JzVZ=+_7ݣ ]p肻|pWjȠC )bAB_Xj`_#Ca"Q Tl\kƯ0;+ ~B0#RMa# G0 IuȠ E ""AITZ hYK$H3!l1`('w^K2 _f@xCC8o3H}q|E}PW Exh l5JAb#1TԔR#UlMO3b #AL % 9UVF@&P]ƒWi>ޙC'68ÀGsPg [Mxcjտ)h8hvEm%IGRh{pa6= Or,s _DՃtlD,tZ\"SC7G;,+}K'IEHnyB2^WQX2{énk.\t X'HY"ĹE/ ib"CXγ<(s.AE0&SY`ۦ! 9f c P]:eVەPsYGQ$by iftVW#,qRZ|S_»@B`̮C5-1 x3s˿_/_28 ao5v9va< hAԎ2 cI}+mXE}"T~GTrC7Aw-pKb@2N&M%=*Fa2b{Z˭y}Q" "ZdP;w{Ey;zJbM1{S)%g;P:L 6}Fj@5#8ƺbAXk C9g, IM0w"] g˒v҅ø'h jFAHi`<8&`afK2E pQ`dy$⎬ ^|=9jyke5a `Xv#S rsN-'RT"`|"N,]ㅟ![g9LEJww$3 :_{4f"vՅO8jaOk"[Z"߀hĭof`_[|ZT'H.}GJP><Ir ɴ13T CRU2KK\f^ bfae+IBs-$S6~&naV)<4s~Wڭx5GV|"ZH2&jqnj7*D[Q \D;h"צ݊ʐ\D)nOzӉutO`Y0o^nwݙ7o`VINom""f>1=w4rX ʓOVC96Jf;N -(/xwbL[dŘX:ѰW0>.>8\ʊuj,r̂ <+DS+A 2)g'%mWev=?܁'Zz f!/Q/8-=$ks-q,J=* qpcdx+BJ}@E23F21>gp7a{߅OQ\H0ivz. VQea(g%#K)iC1F!qDCiL@ |/,@5|G Kf*e*[Z!Tp[o%B R-= }"d2CtDzq,qJp#.FL}j_oƳqk?%5lY38gtf/gs.g.y!(n:]Qg)jG:ՠ.GQ-13(k+V_ 6l謏mt២5n]m]м}nostcaBROuFE<|0J q ]W.e4@PoOD.|L)'u VS؂ˁ1Hy(|TTnػ۠g r0(.EEߑ't4Svc\h!ҴsvS'nE1pȣNgwYr]v+~v+CBs-$S4\n̪ bTէ;r:*" wԹT L9RjÁfuX7_X'숈+Rn#bGLKDYG2l7mڌmX=)ZLZEJOJy9vVRu5|sRm;+FH)nRR2U]-S˶H8"}А2Bl'n*y&!ŶJQ< X( A""BKH? 
5)XZG+#7)ngaq<;ڋ6/7h_wB`yQl!(5l>_aL;$w2{BMv2<ɸrZݙ AKk0L<ܰܚOw4rduLG &8~f̻&e w5 Sqp.hy*.A3e5;{, m8KBrj+ i/52R#f0%?o;'kv2JIQjҐ=4 NHQ×`%yg i(c2nP'`62>\]}W' Jr$nfSIJpI΍+^ӠzGwP).Jq*pIǕ#q sVBUUCZTx: /pV+Щ(h6RQq ǡDF0J(O*YzIԿM:½+$.ZKhrJbZlD>6Vpт1%\6CJcuB&ģ$nH>֩@]Tt%cY~g I~6V"-I)&B`db׫=f` liu:iϾVO:qgNƏo=g10 doLa++3=` 8Sf,LS ,^&pT>QD2YvS,ɬ7cI]d V;KiL"L!C cJNla ;*-uVdpQ&[J9S.Ȕ%J`ITŴ?&*Fup ;3PN5JV|$cdŧuc2?<S(bdJ.lbrJPc0|JS. {@(~rr+K$>g\zsrK;[poNRmnSnž!XfH6&`@WFLcߛ+;7o(6t4He&eC }鳏TDƣ~q{&׊>xRl*u`&v8 0q=?6(2 )U£?}xr\kQE:t[37q/zYHi v`1MVyfxmsF,}INE;{Gold;˗ [?CS-d6xcG}9A֨Vm3kن&XF'|KU4R|1Ced,y|s^.'ꕤ2;C:zZ&C8'6xǹ) Spq)HU\|P">\&kPSM*!QC$g7>.sB)|M"DU' sD/(g5%_MLy'p9-k6> m]Hʈ%G½׃z ĭYU*[,2b šS^2֗yB*FpV1*J-% Fl}q;T)feTf9sƵRGh 3^ ѷZ?EO|.Z3%~`S#N(ҌsgG (T ׄP8#l:ńQkܴvv9kPGCI"X6BJceJ&)=o)eMJYROfH)cnRPx&"(q)mr?JmgT4Jt0Z|[J r7AI!Y~`^Ѧ q`?.0!4o˔L<=&21{>?Nf(~$zQτmnz`3MEαwT^1(\"a!A_-J2뉗q8 }BncZPL;Ew)> ?y(V!a@$>7x Jd[FW7t)Ͽ^?Ako{q(ڋ> _^ *݋;:&w^llmɗ׽Aouo2MnzM Kד ׫˙84O7YߚoG^9uTI !P _n3v~*L[۸i^ߌ-)cWbGFB_GwѴc*/s쵟}}ZnxN&Gu{ `U?H >w%ޙ45L(|&&6.WCAWRؼg$OIkaBnA7S"it"L8;`'?!cfs.!e;[~1&ߺvV,'Jy\k+"Nùrxy4s }4"DP_@oܿe]Z=PIr Q>T.:{Ňz>T駤#R+Iʧ+PGiG %;π#TR#-"/ /\0"~ jQ>dXIj![{:cDֽOj3k p:77[_Ii0Y~Gn翿B1\L&7ፃFQ­# ^<Θ6c(Gwm͍IHqp+R9嵓T.LQ7)Ә!Mp7˻"GFwj 5HhxyxӽR[f 1%u`39xpXiaC⁐@s}3xW *% |1 Wwӂլy'-xZP@ H7@ApsZ5ĥG̥Ow\+Oq{g}4kCO]KAq)뵢ɥ󸴜LA2("ܙRy\*;ȹ<.U새|~WNT<.jSi\ܬgVN\z\j2c4DLM @csoN9/xr4GD'plw )l(`0"-"y $0ʁɦ:3%O!4DܧJ~ҿLpNߛ@˷n]YIad-^XjjO,Gr˝\mj_XnSI;)@]CVl>r'LT>o}A"w"DiyǬ( `sm<19X~l9SLM+;emaWN6( '9!ـ? Nȴ' ۩{Bku6ג< ]s#kt.)u-␜p%zr)[hK]?A1 Z-W9\ ך0(- V\_҈50J8w֋ '܆xI@`uoAyx˾wúc5ĿVZtBQH"^*BTR  6rRG"Kq [-(&KRJ _N/p AGF_EÛ6s=p69D![xCuK'MP91V:dND$~"iQU=A/)v><|֬ϽjolA9ewH[u!Rp;iUC!VWfd]ݎzT-EUCvB[S;h`26cw֐@ { j!;wt3 ?N<Ӛ:r9/Z$UGv|_&`k?nwW_m5EXr6Y4t_up"D"\W"+XŦ$v p׻}.xn`/ OhbU}9s.]_%U 8YٓթZywϱ!dR0=7PT0@3uʥP,2J  3 P<γkϡ jS>7E _x>᳝_Dϳ5bw#'&o}>Κ?㭆T\:k>Z?4/KU$ͻD |cD{?4Jui fo_taYUr۾@'-rW]j< _F&Cc3g=gRŭ52סX_fyK/S,v87"诃@j) ȲPoǤ{Di;@>ɳFDo+6$SղIRQ@y>hJ䍮NIC7J*<h5]A݆_'bV̤ ZB4s`(a 0t#qt((R20LhMsJ7B$u8"51,x3)W ya=bB`9FFl f.RJ0pM!$v:WMGyPuyW։J5|~0MȜ& gW!2U@Tp\Yv=0f +M NM3QLQL"bv,2Z/|h}"s.ʼl`$!`^0 DRXcaGFFdPޙNra۠efK຺WDŰ/gO)!3Tb QY/ @9HWxr  %H)ҰԀpW.o<%+5 kX8/L#VlyAp:Rs??/Ƴ~gYgFs7&煹O7F|% *:YX9Vha^=D2FHn`eG LVmƞ}B}{ZxD(~2/bur2/Ơs &'qf_0$Zݾy6Rjvع!B6;st0kME|'Nyj\W?RRKJvpdN89WOFœu`oA|O]xnT\w4hRI6Yc<sfVa} dzDvsDC8ۉ !?c{B! !d.m›:Kn8ׂϚ)Vpg.I]:춧+8~.|J¦ wf-j0)mZOX^^TG^j%4mD5m5/dtɷeF7JjКjgy{U~O% .6ޗ5zLE brb0aRKBί?3W^_+g{gڧ8;C8)!^O/K7]sLQ uھtۣ4edgҭ}YtC8)A\n j=JNw4n  0ۇ;Q?}L.ρB+ϫěahozכdQǿ qX<'_o Ⱥ#oUwC)MlG/y:ͤrH]N zAhK`g2|SoY&-~o2%DO GGZhQF#5rBŒre!ZG綨n?4GD'plw )l(E>Oi?NwZbjz"@sMOҰʦiC3%5DJhsi)7BkND!0؀"J|1j(Z-F^Q,ש Y)Om,Z)^py 1)]4]K&sfx|M2.LR`-Dab~^-N0eeN2[?we0BN[*30ִ! 
#{!JE K-f}de0" $JK൲5+[t%-zsI5z8U:Q URR8S{ac@t-tќ7Xq ,$bDJf,By+!]>D+ 3Y䥂,pFi\ZY5G8TS^{.]$tp( eUu./E@},RZ#QєJ$z\^lt<((M\E)(N⺮(:B*`1^ ()Wg{dhЅ/li5K\/zWVBY a>.zӪ"|͉V00}fr3E!E0aLp2V}ZXAhw KB )ըcI-hOzFUxip&S1 hgdޑ|C=Az󜔅K*5hmpYEiK0LYkUtkiY'gu㖔s'vy|q-`ꗶ;(k?K?!>~xpw1s^k)tqV8 ý"Zq~۽:ɰ|z.m v?6`[˭7$Lq.55Aۨ%`kzc3y;`)!:9 R)^)I)j*hZT~WŤ RFܶ~Cy*ZXM cQrX `D6z`lb0qdQj& 6q 5e*O Hƒ ٻ6n$WX|+C>TlmUvqbj5%x8n_`@rHbsfRHp@a@as<` G ;Ű+{OsX7Nk!@ިWU"00JBD̉>検`&sS"),##Cm[y ǸZNQyU(+ң SͬT0ڠ!λȎTK'`v&帋jU#QXbc?74BNVMyO& n,17;ӰRYG*|/Z-WrDc,p̭T#I~QӚTěR1T֜uܩr=OOcCo'S#3~7qk8Y'FK ҸֲTx&ZzXס>|~b5-uף[t\L)r^)=~ZGqcH ᭖:=5L*H1H\5;T@(ZLM1 `ښ[v)wucq,[]Qj&¯rNݹ%yHWú.՞Tsĭ"}MVjA'}A2Jr$žB ;e~ͯra5Zj ) jb҇eN(2qE LPG484մhښ9sifs"ֿ$J7> `|nL,9fBT-?bٛMAcJ&jK]b~IɎy -{wE\zu{gM򍋨LYz_n֡v偋QFuwxԘ +/i_sQ!!߸zTU@W>h7_e[VVWͿvCBq,S # [\;O ;Ԉy Wd@zs=u$8,MtGnȌy5Q1`:=d?*ˇp~5GW롯9-c+OIx韮3 Y[k|X+υkLc[Y!eKkVG6S+!̫r*=m^g[9=^g8-f>V,ΙӭXLF]2=[Tc $tWu;*W!v3˟6eFöz.,KLc)otM+n9,7ʁu+CAWú*: j.uA5;V۠E;@*DCYpˉ[J*[V AW"Oė.a*1DRӪS^1Z|"%S\~>nBEt|(ݺ;}L1nMWTwnuH7.^2NMBD|y":cn]ZZv|{ڭ EtO˪#i$9|* 'onsŖ?eW_&ϻ4?w3nBn7]ݧE 5-Sׇeh5#9}JڕeXvܓܶu%B EfIy/nvܥ1Zq6N;j:$Q/CX//=(uS;Gc*[;Q 5H7.{2%eJVj^nSn.o8 JJf]xwYEl!q0~ZZd8jʮAlB#>HBd>@SPS;3p=qYyMWVs5f> D|Zp)AF6q8,`LhT0TUZ՟o0Mh|aV[L޲ͦnO2nP-d)hBsXJR hʀ j27r ߼&gXM ۀƂ^',sv_`"|ȕ lh.ߡ+lpi:X ) ç~#kOzOYy x@>j $NHLIÔp8b$a LAD ye1U1-<[b5&; ;mL&&mhü[ftsdW3z% olx ΍ ;X`%u>RܼDKog3pߌ0|gc+erPiyx0aMtHt@b#w`[ pߝ?_> ;kHHh=J/N+pBm{sCGO $ @HP4T bTlLA$px3Tc./ E+l"ݤ STS- +^k]Hioh)MJ!6彐STS)}R*T8cqY 0 HAH) ۴gy<Gc҈q @a)#!UZ$f)*c-GcscEjgZ8"Xg9ͽ 7!0 u)K&+K{]˟z gW."~SVd&F4Pd2/7Q80^ɞT)'}0Bstc;{U[$Aξ:|t^֕9kMђJَgծ]l˰sͽf/K' N>2FY.`Ke2?YNV ֿ,w{xw3/1hNI/'΍p>TZ@S]c=(XWqzhkSv>WamE7[K>&Wo߿SZ-Ǭ!SGS(ĺP$n}uXxeV)@"hT[+$zB!VaVIZfzNEaVH-Z!}[z;}&Q tZz״Rx|M+EkE.@Ϩ^6_ /8qV܊L_l0%=CFxO#(t=-6-7ƭAVˡ^X}JpagTCW&>DVp g8z}.9dϳ^ 2H'5D} UNo_yh+~}\Y 8bDeF \G`dYvFm![CskFy㢍i]lI&_ƤJ868Ysxȼ(c~zWm'i9'ۯmי *Wz2aAhɞ rA߭lA#S $NB) d"G$(c YġH"DAc$#!j4B!ᔉJ92Ĉ.B)"!2x  ɋTjvgIO;Ye&tyӃdZ5Y?ݏo6o#(@Bݾ|>f:TIhޚ\e2~4i! j8Hr Uܩ;F!!=x$s p0 ֕OA9a(އG `ϫ'.3 %5k}d8Dod8|5{_H SK+ej1q,OReHSGPZsu߽6-#LLhqgu]L=oN傄 M.*,B!`PG Gi. ‰%Ep{mZVr!L~u}h4"ޚ'5d}e5s-&FDjTmy}܈;#wF,l\!VCa#q@B(e,ҒX E%1/f.zS=w\-.Wz'ƿGbe6@߲~x^] []7v]OrpߓOGRU8@GT|8J)I%_RO"Qĸژ0b! J$GZ*L %⃥^үB_j e(Sӈ@Hw4Mơ .p" E,QDɀ۬7֢@'Q%@'ͨǔ $fs6GA4L@|7w(Cɱk3r 8!#>'SK:‘S6Z;׻4BdVA*ɽHL?BZtZwݦ>n0*Pa@"^R\*% 1xg.ow4utqKe L+_&V4b8)Ex@ wyĢaEA@@K:G^V w; =Ne$hDBP8iSE1NcL9r!bFh0v6r=|O>|Zw/e;9< ynd\ |iVw4B8z0nH34!U1!8$0!|cHB!ZL?{Ǝ"K-0O;b}W؉,;`eǑf[I^"Yl~bݻh0swX~&mevN3Y|`r |{óuVtѨ]3wigDpсbmP = L:'@)}:W4jcn&+ ;O|4BP,Qe  S̕ʧ+ʾ{,3¡HC?xR8ڻ5k}V̶/&yLg_ ; R߹tq~-Zz[dcu3GeR?lWo%.땏VV[,O)ϩ %(}r£o:U!f|Ԝ{E{Xa`~,7,@biv9%$SMsTEVZ` a}\Xh(~dLo a'Fs$j)qi]3&MJD i(CνS*Xnj }uYoKwWZLq/R<䄉(++e/L:V+٠RF+OKT.WݬWidYK\P&UѦ4۹#1UXf4:aG\Tf,"y! Oq\籀} h-U XRXfL9w)$e9J22-``@;q@p:xP5vGnϋ-۱D0BS+".Jz]<؜R^6qrΣzCm]ϫ"eӌ.y Bp 2B«E2M*fzXJyܱ(u"GkV7DuSlčZr {,}5؉铗˰$["Tuh (;2D4ޛ?I3奜Z2O~՗2x~W5\^Ϋr"Hw(=0W{)ƃUn6|4RXkNmwF Pj31_e*25|^lDʦy![@tI&+rb ($\ FyϊnGl`NA *^,T6abӁm}`(#g:$-e cTN¤贖E6}oX(^Ŗg9\_}*JfBm+-QUHڔlciW=-m#չ9R,BH\R*o>&gAm,EZ׶f U#KvI!HTvf?97Z"\6mIŋlsu^BGUـaS;lyYzY>oVWKj77HEaUOT+3qձ5v#~< k);co&鳖F]a.22gPhZ<~ujdSD%I1gxP=dO*lUs\m~QT cUsAw~9HhϬgj! 
ȴJTʗV84{th탳ZF6#,1 W=ŨW Q**780K8ɶCQTBL E!UWt9NK auAXbF2cr 5`MAEq7Ȍ0X,ea #|eh"Vgf?D&fB+-Mb}ELR)[̵,̵CevBWRM~e9{Ro{E ="+j_A5SU\C$,,dr[΃=ErH' } nsW]ǶY?"0ySZ\1O {ARjSqȲD@%~i4.MSﱛe3: đtFH!.2IlUF;bCmqU¨UOR w~j,+ᅳnr=2 V!?Gɚ4Y$]HRcG-~ȦbFѠ%<8z*-[I-`=ƙVP-[I3,dk -rx?e:OV5LvSgW%R,%\J("'H3bk3Ĝ4񖜚&W▫{Bżz&gK"<]aU^ٰ҆]j&6VvU2lQiey+JƳlA_GM/X՗0h\J(^-}Ƥ*UV5=N|_iM[n Blm2c0Ѫε]- ⒘ٲN Ir5_)84>Gz9L-"X7i|Uwn* 4FAbilcqZ 8כUx_ؼI'eJ+˝۳qYHs8X59 ?pJϖ=ZuBmKiOkgA~D[CSWD6ݲyr7V# Ҙ:,?ӝغ>:O~7t&Sc`2C$s:y !{LnRheP H5s2:eI gkKٱlT 댮M%" :{RwuFZǒRʉMX62ؒ[SR̂|8[}^޶Do9*#v7IT&@vzDfx sy]XѰQ}zcC-O?7%sbt]Zs Le%B+I3_ ޸ZDdG(.k}G##׬X*NCx͹ڝ`\1P_װOɸ!J@ZJA֓HE6h _pg!g^4ιb ?^Bx0-ҵJ1 9!"zR/?OaW 7E5F#(D(! Q;vOT^6DFnL;raF[- d MYv$0!{Hh@c\76rI"\ɐs8)YuIsWGq/C1!c<!kH\%DmxD3sYE ir)&CB OtHsX "ORg?rnTǗWJbZC5&O&J[x JRE32파 ['5.vL!4D=Oy3 )I.wJN]nn@MΎcšX˔t:ǠHSd qjw1Pb C*F(^p!\RntI$jTH?64fiܐKXR,\M>71$ e(^B)%>f Jv+5еK+a4]LfLNYtics)\Jr BLȹǬ~t[\M¥o[\OS Qَvj#7Z_@W˶~M> 0Ndu c+kG=y%W9aRT'Bpwۣ_ q u8Bt^FCq5<8䚣ul 􈷫7Mdl*h.&c>_7&XusX%aؑ&b{_-viҮu  $3?putJEh4,eS\Otn\e@x 튍54bQj-R֙Mi:jtY]":= E%L.W6Ή)ОCF9 l q#ZqԠ]J#m݃R$EZlAåŽi,d (xcSr4grQ^4\3QOGouxFpEL%i|\wY (0 h q(B @F k|>kB"'qNrhHفkJ8=(KqO"hq87W@ґIH>);@IȽ:T=E&u\..%7A:4!GRb,4pAtAkE+ءLZfT @&kZm#ǵݢdu{Y`8WpbEMl7C[ѯ-3إ/Kܖhmw9ai il,Y1$K-() @{q~zݨ5oo^ t~FUʊW91׌Eқ+eRPnFK^*.s;/秲BYɈ],&ƣޤ(BϾ|Ӷu~^)-yӉqÅEYH(%o!M嚲{C{=pT#E`,k|%`ka}}ZGxj[ < ;#p]H]kYݐb(|ۙmV(BZ[#ZTûlbPk|%J`ۄ*X3X_7ǣvʾx٫r5pڏ=hYg1jo{8/XK5(?7LKQ;eꉽl-NKEvK-NKF"V(-T셖jDq;j*oZTa;-Ux 襛Ǚ׏u4?.>8/?BqGGZbdNQ +"291Ɣ-\kT'5ץc̼D_/x nQE/A`D!LG#'mnFtP~eg|Y"J^uN2gs5&_tƎ)v='O-^û$>3=}gJ$}3Oo-BF/Nm=Gn:%PuVx/i3۽7\k^T^n/bVJ5_<-ۨVAO1dcC7"UBQ`4KFi)Ts(AZW Ue~8oGI?\vr1ssuȑ_Z}pp<1OF $GUIzfj7 :yMA5OB19h AWč/sh?-`Cj?G߿Z(=uuW;.RJyGDŽV_-_x .-X]PW_R>L0mT;c!_/HIya: W9awcI"Md~,n4S pjsu|9<'$GA*G}<锝]aE>caaOFEUN6ɞ"GӇ`  ^ ҵȗct1_̞w>>ԟca;ę$r Bl~at5<^ZP;X?.fy_9.fk HeXȻ4>S8M'?W5yN׆29O/Hj+y_^KcMZtiFg8(sڶ.{t9"xK~óKz+e}>D˝X:M۹~'_g~P5#=7w"n1ЃzҀJNIJW V&~ỿDz'֧hʹHb'*}zد/k񱶡܈Ą#`=q$2F$LQ+@Kt}p$nC<,q,CV[3eɤM#A[u0K ,oa\>D3HˑA*yFgnH聸rq ́3 &Qʼn1?mPpAj3^v ّ9pGѲUp2y4fNTSQ"SkA)0957LKK41%mv[edz:𛤵 ֐걼q[j-6՜_Xz\rRX}(ou0 Oh [kBLVwuYSrO' m+oh9EE~WtϭE=XTZ|~u/%<ݸ$Õi.Ka'2ZXkV^?78'0dZ띩y9黔u~&A*QS,7;':B۱BEp̎jx>B~e"&xڸ޳X`ޥa:_b֊>d.EC>OƱhQNnFKW({AZ6^64+5d׼In7_PYi%m;{Faw,f z8k~l~2{j)_m8 n4E#Os[4op|7@2[\zo\DdJ?ncz [-%S.u/5Ãng[hLqu MJv [-%S.L w>$52% "dʼ5+acs?J3=}1kuzO 7Ax~oק 8q^kk@o=&aKZ!Œd*h} OSaJ>g0.8:釈v֑׉¸ÉJ| N (/:gFor}sE8~!EjIGҺ@ގt^E"|0d2[kU:f+;4zT9$%q"PRCHgQx\+ʹd=Bu=^.~)϶)Жw|LT1_4T^nDu} ?@+~k& Izo)Wgxw?4=K4 j7Bk>1m4B x=1HhlH,%my9zHΜ U^PQ * ͻ5QҏsbKkiCwG'.1WXZ uunmUe\lجsf5X#Xedwӳr ?UCL}esĢ5Gl\GGwuDLuuvHڳ_g]<=nF/Y;qTݮIbGqW1Psqm=W\[2Ns,M6jfmZ9D'eS!s*Y,ᗻOmUR0vHGq\jw^]yTRi]3|pj0*/Rm1Eg] g,q/ҜR<ϢNh;nƷ̽;nk #ũfGc[—b1Pb NI} Jv쨯cnΎzUvp0E 950M)Zk%Ygt;i5O`%Ɛt(ƒkGcH1Deka nZpy{B-T߆˱56!rbrM`=sM<$ x*46Dy7r:5aj jS*=o݁ 5'8C*;1%!.ӢXuj=3Sc\tjnlZ1F/z5Z~1_+,o&t_|s 5Qْ E&&8 h)3*)N2#jb5,mWT?j}q=V*In7Bkf-iMERrFIϺx.^26]05- u=KK▆@S"^Z!/j0דFsCSXYvp+ urX ;p]R8-d: &@%^QsRب%*Qw%h(o;ND_USlZ8!~ BWyֶQH""8PC3aM8>`5 /_EJO:~h&pA}Nd ( 2vQ֟RK(̴z.}p2"pd2('rZ'z|f՝ATRBpX"QA#s2/h<4p-2Ҫ'7`MHXD)FJ `9Q\'?o}S{%:'Dd~@FbaM"(m}BY- qUY]^:"=su XJG 8-DpE%F5= nN>]3eWu#vp r d ETDbJj1~ R]oOֿWKC;LtY(:x5S4ʵB4HOQ0Mhr R!5A/QE '۸_΁; Iyױk1Vy/ٍ}D7OR*ޥ ahppzJIl 9yJPXR&M?L~\/j177I`Brb8ZcU:>z-F4޾ zF%QU}ymo}Do*f+QJ"uc/QHd'EF@h rgt$2!8>U\r$%1|&H=t} g29(RĬg> Ԋ^Tī5q:/0ދ2h"cyk0(I@ykRpAP6>JnٻFrW* 8}؈e{-[Uve?mT"d2&$K21F; kSg;cu!ؖƈuXHT#|ݚ4{4p&?~xbwZz7-bZGCEKY:@=&|mSF]uֱXx/oi%A($&@l⸓FXyxxnj{\ kʷ%lxh4w}4$L=ϙ/kKҞy*.gB!dVj2kwG+X>.=H׊Ƣ|c*6T |  䈯/.pIb"ڤ6Xy Ss5#:-\J"QVQǾ]C7|82=9a=`l]ԫs@ͅ`JwWqq B OEJFUTXx,t9Xn~|]jۣqwJqHRS|FFh:9MEߡ+8v`5/?'୅?=##sПQ^̅s ; 1*_6C]0UtMj arր@dOT(x}%9q 6[9^%EqfobpsՆ 2馕OrV`([/z)'xXݮz`j;w[]ghNo.d|1]Ԧ_! 
ݟls8]k.xNm|&'ǜ!%@55+b"@6cIuv-Jh'MK񖗥-/SpT.x ՚"^Gr$s& _Nnr$<~z8@Ļg|{iV)[$4`fk9``g 6um^_ˡύ/s$3$,lE~& t@dE 2#&~I}в{U 5Rp +؋.Z Q7-8fO|=49]r+‡vQELUau^+OBw R𻎖 8nB=D.[ ]K>fJU +4֭4dkfylO>kmygł'dŔAnguLף&fs]?AkC}Sssp4OJv9'Ʒڋ~=;M4bzU.",$} LŕQ7+~7٪m#O? R@FB:Jc. w:w.}imb?sq'z|&%$)k KMEcRevΝV}ѶoN+pqTKIBr!{Qz JU!ߑ]ƭw=0?V_=̧a~.C|P{ų6,#_'QOC{$!};뱬v}j7R}mz~5P!Fߧ \~Řu ҏLW$2i&4ɔjjs \Eo}5E}I q2nq-~Zq@t?_, g<ҙwu@?vnkt_Iy%ĺ?msu5Wnj=Rj?Z=RqT{_o 9}Ѵ$߼"xf_{~X5-x1pEJK!PPVI%/*u=]nft~N|*d/^5SdF"SitH'6)ٓε`jjbHzW>+ZtŹ`\c~l1-Ƌ{H{/o b[!ڿ "_=8Swy-a{ Y%VsFFJ `db7 +^wau[28-Y(Q} \O"SῬq 9⊺& Ŀ]kbƾ8\qOݥ{-[)*^ ArVxI$>XBπtv=/.䰩b3޿Jy1PE:rЬu6|}x=}9%oCi Nي\ddӡ_u Bo,y"+S;U8S @zD{U{\ 6Fd*D{S4N֩rt:'9_L!FMm_Pt'J<7m/jALI0zg tX~O%'n>) bȎ<*;&HEAMZp2}~(~ϩLֺd%Ua(@Q̵KȳdϮo^$BDaKDZ|>K[ESv2>ۼw]\)|m&f]@!eԳsq5|Y[xBϿKqkb(+(NKJɠ|Ƀ] :Y{N/f\J{JmPlh;1h8{iWKzs&Fhey5SCt~$cRTbة`?Yiپy>Iz/خh0ϴUȽ|㢊`b?Xꇣ^,y.-EgKUnJ}jR=2}`K xnZ+NTZ=փwH xʽp^> `Ɋ@ p'gO >_/]^lNw" [v5NHɼ}\#-($wEe-9N`e6\-]8tDW<OpD?W!6=ǻY%! k%]Qv.`<NJ$Ew:Ѽ" 7$7n Rczk\?62LJRR'S꣹!&yJ]o ^4?>2^ I tn)PٰEZUcJ.48Лo}`O 8.n9sP_pdU'-ij%^t|mԢ}p-D}>~_"IuYM 鎔Ϛu*`3zȀ<{V_P>ou|E+_oJuKo+/{FsfPx E=`炝Q/Sٞ~ffukϥ0ˑ~ xqgzp–* 2c @~ncy\rN{q%=%d #Z5#mІ0JPXLJ*^*mmF\*R9,$0>+BZgg-bIQt.fhV9, P;|p2q*Mq+Mѥ@]l@5 Tŵ`][7LFOüecy;? 0Sh^q;>em"zbtyZ5X50Hh3)NZ<' Fj&yJ)y=Rg פ@z1jXiC(dyJ:ܸ\SkI315*_ *PijըJLP,2¢"h<2Rf 3Nn3IdV=Z0<F815J˩5|Sci9͈T!k,dc4*љwNYG:G4*\{M<|TIx% 6}HȭSKJAd04i5 Qy@ot_.b*6}o庩׹VL%o~{{(/zěi(~E`> *ߜ$2͗zY(mNEӜ ք7];Ll'ͯw48$&7ڃ>WLb*3yx_F,4Mfɯd>['".ջQŅXP1/(f,w+1 /GHJQ,8@3E|iֹhq5ė,u+Ǵ0uQG Lc}&Z =%Vӵ6RIR^ t\E_: B(Rj(-fow Z+Mom&=M@r:ʥ5R~%`N.U߾TOfXcmF@99=hQ @X33o@&g |eX(NBuiP͝K$H[;mZm[)tMPF1O7@߇ 3Z/9-ܬpD[<\ 9lU"薀'ۃbLq=:a]=`[I6qmxClcfc\J 7Bܓ!IwA<(ݾ9Ul f3\=[N|8NqE:ʘ[Rp쭧<8e%PXMï[n0ǔ)q8M'6z% ʙL4TjMp5-SvʙDۈ}8l'+=PHWTaSV{So XQ]GO_k`FouZѴhj>Jr@[iv\d<֧ͬeJRɯeanCffp1(Ui*z^ͮO(XǑҁ .{Ppf<@AZKFKB2}V3l CRt`YT©b5K.۩~`r)*൛T`KL<;ʴN=?! 
[r*M[>7d --@>D?;#Ԕ鲊R>ā;!Âh1g)ZMţiNDb4ƸCH(:ׁR&zSkp*ppk*;Dz//O2.ވfڑd&*:SQtGK,b&m+O- D, D+=@twчH}hMV{$}LxzHnF+5ՔѮ}Dէmd,ބCe,Vp/lDாX>@ZI$R'e*&)v:mNiҘvANVښ9M>ÑN!X{]ntjb}.xȟ  0ʱ)F:# i>el|Vӕ\L3eQYC8k[+mdBQ4((,-L`-!E FG:o$3UMŠT \wG3/!|eQ]=}SYZB% ; gōT/Kk􇕲x|tiq-Ga^ V%kTL O[ >`D,}ۜҳ&R1 mBp6uZCܜs JAS i!i.Ie|vܗR.&IQ|zj-QK1NYm#hAQU| Ƹ?pnP˰}0_?+O(D GT;C8Bbq_Pڹi.[/*p:u{ٸ*8W/f]OwWfVvx)jM%}ہLQ&ȋ@)c/8A d3a[MBsR:m>.烈,*h @"}3 'i&09ldZq-Ѷn[c1Ɍ"i%Œc4D2MLepxf"4!PTSEEQE2Ua~b %kA;FVǨ,ǒA \G x=`a.e/m7|ۮSwۭې@Omj8J@fh861'vRi5Z0'HJB8Lfn+I<tG9L^t>wV/Iؽ07#e\(T k&vlOs]8LT_*Oc@2F!ePv=gZ!Ɍj\F-f B4J+.R(Nd/tl '\- Y=^^4qs==_~_>3_X܊3_1֤Oޱq OmSR~PVim$q?9gDn;R\HtDw33J甝_QPw]J!rDD5%;[>,@hl9VMR9 |%bqKŻNf fpruy]-7yMZ\Z֒l qǙ&G(qdt>s8*'fEFuUJzB[휢Y০Vy ;~~BR{a!HX0,{KwVnY.U3OLJuS@bI ,|EĕEO%jDNBYϡKPD ~顈aŇ"l<(,[ tw@ ؆!?:j85XpŁmi>y[{~fWͫ pԺ\gʵj7c9uJ˙Lrq}w/>.ӫKL?JI.[+[qrWq֊RHM}˥,AvzP10 pN*9j FvZ;ii\fYbmW\0!&p*"eJ"A$ yQDŽa+#]8k9_ ?*2!+.OZ< sųq{{ 2xZKz45'aTv^YPFC&)iiΔS,}Vg,V9ku<%C#N)]Vx_bG'PL(29I@AJL0,-J+$e 11U4"FKs,%{LTEw,RvP=LhtbF[#A)k|T 2'0||!XۇI֡KfO!p;օ1>p c cBQNF.sT֌F^pVb_IG{.VTĪkoIGF'%A5x%J 0(k]K 0tRDH"2ˏ c-+C6H#݂Ѷ p }ְ$$|zBMmP`qPO J{UPS9^`O/pm$^yCivhtH Uْeլ]lZH9vjd5)pճ휾9]D:.I I&RBrCSAMs\g4u+00#_o¤f1J"@|Yka+fVunR'"%̱f^['OfV)l̚pX?)yle`?ےf<MOo3?)|gX͌l9boe?_Z=#n))~G}s^K( L\0\K‰(kyI2I9,u .̥/KR*FR3#[$߿~ 3@@v~`olt ǃx/i $qUV B%NFPE_c?ȗNMбҦh %$Im6@,Nz(z t6'̾T5PeCr{ђVUL>?jAcaFvbUHG ㍩1qg;* m% g 9N\&CI$\$ mN9>o(S{f) t2NE#D{<9TćWEȥ19raH&ɟiJŁhs8RgKsyࢬup> is޸'v Lp 446O"'PSgP9"\WLەCVF8cr|_=HT*#:`S4i' ñH%2;YI lXi Cb0Gvl; wN18tsV^ J9"pcRg >i 6] ~piBʙ'yA#io1 9YSժNqrnzbGп*j`jO05)_QA2]fDF Rr"OTHEi" pduj5FX7z;eo 8+ġmlAJNc"Tr rNq!~m&WS2`gvLIe mYE%p=ϾOkqlO蛴 VՃ|Y]x"UެuMF\ܼ^/6PTWf{kO2"}7P`v}`f|WSg mx̰znMpW!y=P&t'{XuOsGp*[)(#0sӈhDZ؃9A*zw 3Ž5;/F@9!L =NHSDv,Ń ϝ6+V 4ժ] 6X X_Cl/!V{b3C 85񵓖uS@'RPΓeUHP^̠"sHK B<=0-^D_CcG{ 8݋{N4 w)UTs$!bq:nGf&3<[h O)NI7v&'vāb|Gv[D9gE:np7΢5"o@ 8?N=Dx_d '4RWlޮL8C 䆗|DV~\7$+?.1ɂEs)E~\JQ%KIp>oPo9\BrQ var/home/core/zuul-output/logs/kubelet.log0000644000000000000000004553537515136337571017731 0ustar rootrootJan 28 06:47:56 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 28 06:47:56 crc restorecon[4562]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to
system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 
06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc 
restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 
crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 
crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 06:47:56 crc restorecon[4562]: 
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
[Several hundred further restorecon records of the identical shape follow in this span, originally run together with arbitrary line wrapping; one record per file, each of the form:]
[Jan 28 06:47:56 crc restorecon[4562]: <path> not reset as customized by admin to system_u:object_r:container_file_t:s0:<cA,cB>]
[Pods and MCS category pairs covered:]
[  8f668bae-612b-4b75-9490-919e737c6a3b (c10,c16): extracted CA-trust directory-hash files (SSL.com, Secure Global, Security Communication, Starfield, SwissSign, T-TeleSec, TUBITAK Kamu SM, TWCA, Telekom Security, Telia/TeliaSonera, TrustAsia, Trustwave, TunTrust, UCA, USERTrust, XRamp, certSIGN, e-Szigno, ePKI, emSign, vTrus roots and their .0 hash links), ca-bundle.crt, ca-certificates.crt, etc-hosts, containers/registry/f8d22bdb]
[  a0128f3a-b052-44ed-a84e-c4c8aaf17c13 (c9,c17): etc-hosts, cluster-samples-operator and cluster-samples-operator-watch container files]
[  6402fda4-df10-493c-b4e5-d0569419652d (c0,c15): config and images configmap volumes (config-file.yaml, images.json, ..data links, timestamped ..2025_02_23_05_27_49.* snapshots), etc-hosts, kube-rbac-proxy and machine-api-operator container files]
[  e7e6199b-1264-4501-8953-767f51328d08 (c219,c404; some kube-apiserver-operator container files at c107,c917 and c202,c983): config configmap volume (config.yaml), etc-hosts]
[  9d751cbb-f2e2-430d-9754-c882a5e924a5 (c20,c21): etc-hosts, check-endpoints container files]
[  f614b9022728cf315e60c057852e563e (c214,c928; some container files at c776,c1007): etc-hosts, kube-controller-manager, cluster-policy-controller, kube-controller-manager-cert-syncer and kube-controller-manager-recovery-controller container files]
[  0b574797-001e-440a-8f4e-c0be86edad0f (c4,c17): mcc-auth-proxy-config configmap volume, etc-hosts, machine-config-controller and kube-rbac-proxy container files]
[  bd23aa5c-e532-4e53-bccf-e79f130c5ae8 (c318,c553; some container files at c37,c572): etc-hosts, hostpath-provisioner, liveness-probe, node-driver-registrar, csi-provisioner container files]
[  5441d097-087c-4d9a-baa8-b210afa90fc9 (c2,c23): route-controller-manager config and client-ca configmap volumes (config.yaml, ca-bundle.crt, client-ca configmap/serving-cert secret entries), etc-hosts, route-controller-manager container file]
[  b11524ee-3fca-4b1b-9cdf-6da289fdbc7d (c7,c13): catalog-content empty-dir entries, one directory plus catalog.json per operator package, alphabetically from abinitio-runtime-operator through intel-device-plugins-operator (e.g. aci-containers-operator, cass-operator, cilium, cloudnative-pg, cockroachdb-certified, crunchy-postgres-operator, datadog-operator-certified, dell-csm-operator-certified, dynatrace-operator, elasticsearch-eck-operator-certified, falcon-operator, hazelcast-platform-operator, ibm-security-verify-*, infoscale-*, instana-agent-operator)]
Jan 28 06:47:56 crc restorecon[4562]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 06:47:56 crc restorecon[4562]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 06:47:56 crc kubenswrapper[4642]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.962284 4642 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967491 4642 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967563 4642 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967569 4642 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967575 4642 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967581 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967587 4642 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967593 4642 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967600 4642 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967604 4642 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967608 4642 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967612 4642 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967616 4642 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967623 4642 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967628 4642 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967633 4642 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967637 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967641 4642 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967645 4642 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967649 4642 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967653 4642 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967656 4642 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967660 4642 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967663 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967666 4642 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967670 4642 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967673 4642 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967677 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967680 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967684 4642 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967687 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967690 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967695 4642 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967702 4642 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967709 4642 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967713 4642 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967718 4642 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967723 4642 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967727 4642 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967734 4642 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967743 4642 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967749 4642 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967754 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967759 4642 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967765 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967770 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967775 4642 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967780 4642 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967784 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967788 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967792 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967797 4642 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967802 4642 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967805 4642 feature_gate.go:330] unrecognized feature gate: Example Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967809 4642 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967812 4642 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967816 4642 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967819 4642 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967823 4642 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967826 4642 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967835 4642 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967838 4642 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967843 4642 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967847 4642 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967850 4642 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967853 4642 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967857 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967861 4642 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967865 4642 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967869 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967873 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.967877 4642 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968011 4642 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968026 4642 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968036 4642 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968044 4642 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968052 4642 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968057 4642 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968064 4642 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968072 4642 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968077 4642 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968081 4642 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968086 4642 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968091 4642 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968096 4642 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968102 4642 flags.go:64] FLAG: --cgroup-root="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968106 4642 flags.go:64] FLAG: 
--cgroups-per-qos="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968110 4642 flags.go:64] FLAG: --client-ca-file="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968114 4642 flags.go:64] FLAG: --cloud-config="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968121 4642 flags.go:64] FLAG: --cloud-provider="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968126 4642 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968134 4642 flags.go:64] FLAG: --cluster-domain="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968142 4642 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968147 4642 flags.go:64] FLAG: --config-dir="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968154 4642 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968159 4642 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968166 4642 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968171 4642 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968176 4642 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968180 4642 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968207 4642 flags.go:64] FLAG: --contention-profiling="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968211 4642 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968215 4642 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968220 4642 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968224 4642 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968230 4642 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968234 4642 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968239 4642 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968243 4642 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968247 4642 flags.go:64] FLAG: --enable-server="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968252 4642 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968258 4642 flags.go:64] FLAG: --event-burst="100" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968262 4642 flags.go:64] FLAG: --event-qps="50" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968266 4642 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968271 4642 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968275 4642 flags.go:64] FLAG: --eviction-hard="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968281 4642 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" 
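Each "flags.go:64] FLAG: --name=\"value\"" record here (the dump continues through the records that follow) prints one resolved command-line flag, which makes this block the quickest way to diff a node's effective settings against expectations. A sketch that folds the dump into a dict, again assuming the journal text is in kubelet.log (hypothetical filename):

    import re

    # One capture per record: flag name, then the quoted value.
    FLAG = re.compile(r'FLAG:\s+(--[\w-]+)="([^"]*)"')

    flags = {}
    with open("kubelet.log") as f:
        for name, value in FLAG.findall(f.read()):
            flags[name] = value

    # From this log: flags["--config"] == "/etc/kubernetes/kubelet.conf" and
    # flags["--system-reserved"] == "cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
    print(flags.get("--config"), flags.get("--system-reserved"))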
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968285 4642 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968291 4642 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968297 4642 flags.go:64] FLAG: --eviction-soft="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968302 4642 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968306 4642 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968310 4642 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968314 4642 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968320 4642 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968324 4642 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968329 4642 flags.go:64] FLAG: --feature-gates="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968334 4642 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968338 4642 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968343 4642 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968348 4642 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968352 4642 flags.go:64] FLAG: --healthz-port="10248" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968358 4642 flags.go:64] FLAG: --help="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968363 4642 flags.go:64] FLAG: --hostname-override="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968368 4642 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968372 4642 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968376 4642 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968381 4642 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968385 4642 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968389 4642 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968393 4642 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968397 4642 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968401 4642 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968405 4642 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968410 4642 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968414 4642 flags.go:64] FLAG: --kube-reserved="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968418 4642 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 06:47:56 crc 
kubenswrapper[4642]: I0128 06:47:56.968422 4642 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968426 4642 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968430 4642 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968435 4642 flags.go:64] FLAG: --lock-file="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968439 4642 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968443 4642 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968448 4642 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968458 4642 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968463 4642 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968468 4642 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968473 4642 flags.go:64] FLAG: --logging-format="text" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968477 4642 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968482 4642 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968487 4642 flags.go:64] FLAG: --manifest-url="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968491 4642 flags.go:64] FLAG: --manifest-url-header="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968499 4642 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968504 4642 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968510 4642 flags.go:64] FLAG: --max-pods="110" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968514 4642 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968519 4642 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968524 4642 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968540 4642 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968546 4642 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968552 4642 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968557 4642 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968571 4642 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968575 4642 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968580 4642 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968584 4642 flags.go:64] FLAG: --pod-cidr="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968588 4642 
flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968598 4642 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968602 4642 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968606 4642 flags.go:64] FLAG: --pods-per-core="0" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968610 4642 flags.go:64] FLAG: --port="10250" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968615 4642 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968620 4642 flags.go:64] FLAG: --provider-id="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968624 4642 flags.go:64] FLAG: --qos-reserved="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968628 4642 flags.go:64] FLAG: --read-only-port="10255" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968632 4642 flags.go:64] FLAG: --register-node="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968636 4642 flags.go:64] FLAG: --register-schedulable="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968640 4642 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968650 4642 flags.go:64] FLAG: --registry-burst="10" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968656 4642 flags.go:64] FLAG: --registry-qps="5" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968660 4642 flags.go:64] FLAG: --reserved-cpus="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968666 4642 flags.go:64] FLAG: --reserved-memory="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968673 4642 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968678 4642 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968682 4642 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968687 4642 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968691 4642 flags.go:64] FLAG: --runonce="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968696 4642 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968700 4642 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968705 4642 flags.go:64] FLAG: --seccomp-default="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968709 4642 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968713 4642 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968719 4642 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968724 4642 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968729 4642 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968733 4642 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 06:47:56 crc 
kubenswrapper[4642]: I0128 06:47:56.968738 4642 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968742 4642 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968754 4642 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968759 4642 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968763 4642 flags.go:64] FLAG: --system-cgroups="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968767 4642 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968774 4642 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968779 4642 flags.go:64] FLAG: --tls-cert-file="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968784 4642 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968791 4642 flags.go:64] FLAG: --tls-min-version="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968795 4642 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968799 4642 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968803 4642 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968807 4642 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968812 4642 flags.go:64] FLAG: --v="2" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968823 4642 flags.go:64] FLAG: --version="false" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968830 4642 flags.go:64] FLAG: --vmodule="" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968836 4642 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.968840 4642 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.968997 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969003 4642 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969008 4642 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969012 4642 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969016 4642 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969020 4642 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969024 4642 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969027 4642 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969031 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969034 4642 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiNetworks Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969038 4642 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969042 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969045 4642 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969049 4642 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969053 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969059 4642 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969063 4642 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969067 4642 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969072 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969076 4642 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969080 4642 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969084 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969088 4642 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969091 4642 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969095 4642 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969098 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969102 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969105 4642 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969112 4642 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969115 4642 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969119 4642 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969122 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969125 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969129 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969132 4642 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 06:47:56 
crc kubenswrapper[4642]: W0128 06:47:56.969136 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969139 4642 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969143 4642 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969147 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969150 4642 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969154 4642 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969157 4642 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969161 4642 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969165 4642 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969169 4642 feature_gate.go:330] unrecognized feature gate: Example Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969173 4642 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969177 4642 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969198 4642 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969202 4642 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969205 4642 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969208 4642 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969212 4642 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969215 4642 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969218 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969222 4642 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969225 4642 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969229 4642 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969232 4642 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969236 4642 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969239 4642 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969244 4642 feature_gate.go:330] unrecognized 
feature gate: InsightsRuntimeExtractor Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969248 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969252 4642 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969255 4642 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969259 4642 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969262 4642 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969266 4642 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969271 4642 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969274 4642 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969279 4642 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.969284 4642 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.969298 4642 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.976863 4642 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.976919 4642 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977002 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977017 4642 feature_gate.go:330] unrecognized feature gate: Example Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977022 4642 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977025 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977030 4642 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977035 4642 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977039 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977044 4642 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
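Once a parsing pass finishes, the kubelet logs the effective result as "feature gates: {map[...]}", which is Go's fmt rendering of a map[string]bool, so the settings can be recovered mechanically. A small parser sketch (the helper name is invented for illustration):

    import re

    def parse_gate_map(record: str) -> dict[str, bool]:
        """Convert Go's 'map[K1:true K2:false ...]' dump into a dict."""
        body = re.search(r"map\[([^\]]*)\]", record).group(1)
        return {k: v == "true"
                for k, v in (pair.split(":") for pair in body.split())}

    # Abbreviated from the record above.
    record = ("feature gates: {map[CloudDualStackNodeIPs:true "
              "DisableKubeletCloudCredentialProviders:true KMSv1:true "
              "NodeSwap:false ValidatingAdmissionPolicy:true]}")
    gates = parse_gate_map(record)
    assert gates["KMSv1"] and not gates["NodeSwap"]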
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977052 4642 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977056 4642 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977059 4642 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977064 4642 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977069 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977073 4642 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977077 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977081 4642 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977085 4642 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977089 4642 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977092 4642 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977096 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977101 4642 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977106 4642 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977109 4642 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977113 4642 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977117 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977121 4642 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977125 4642 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977129 4642 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977132 4642 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977135 4642 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977139 4642 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977143 4642 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977147 4642 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977152 4642 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977158 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977163 4642 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977166 4642 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977170 4642 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977174 4642 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977177 4642 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977203 4642 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977209 4642 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977212 4642 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977216 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977220 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977224 4642 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977228 4642 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977231 4642 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977235 4642 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977239 4642 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977242 4642 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977246 4642 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977250 4642 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977255 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977259 4642 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977263 4642 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977266 4642 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977270 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977273 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977277 4642 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977280 4642 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977284 4642 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977287 4642 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977290 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977294 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977297 4642 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977301 
4642 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977305 4642 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977308 4642 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977312 4642 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977316 4642 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.977324 4642 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977447 4642 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977453 4642 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977458 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977462 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977465 4642 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977469 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977473 4642 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977476 4642 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977480 4642 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977484 4642 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977488 4642 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977491 4642 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977494 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977498 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977502 4642 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977505 4642 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977508 4642 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 
06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977512 4642 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977515 4642 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977518 4642 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977521 4642 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977524 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977528 4642 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977541 4642 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977544 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977548 4642 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977551 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977554 4642 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977558 4642 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977561 4642 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977565 4642 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977569 4642 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977572 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977575 4642 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977579 4642 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977583 4642 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977586 4642 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977589 4642 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977593 4642 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977596 4642 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977599 4642 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977602 4642 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977606 4642 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977609 4642 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977612 4642 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977616 4642 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977620 4642 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977623 4642 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977626 4642 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977629 4642 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977633 4642 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977636 4642 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977640 4642 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977644 4642 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977649 4642 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977653 4642 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977657 4642 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977660 4642 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977664 4642 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977668 4642 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977672 4642 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977675 4642 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977679 4642 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977683 4642 feature_gate.go:330] unrecognized feature gate: Example Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977686 4642 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977689 4642 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977693 4642 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977696 4642 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977700 4642 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977703 4642 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 06:47:56 crc kubenswrapper[4642]: W0128 06:47:56.977707 4642 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.977713 4642 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.977935 4642 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.981152 4642 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.981257 4642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
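Two things in the certificate records above are worth reading together. First, the rotation deadline (2025-11-15) falls months before the certificate expires (2026-02-24) because the client certificate manager schedules rotation at a jittered fraction of the certificate's lifetime rather than at expiry; upstream client-go uses roughly 70-90% of the validity window, and that range is treated as an assumption in the sketch below. Second, the E0128 "connection refused" on the CSR POST is this first rotation attempt racing an API server that is not yet serving on api-int.crc.testing:6443; the manager retries, so by itself it is not fatal.

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        """Pick a deadline at 70-90% of the cert lifetime (assumed jitter range)."""
        total = not_after - not_before
        return not_before + total * random.uniform(0.7, 0.9)

    # Expiry taken from the log; the issuance date is assumed to be one year
    # earlier, which makes the logged 2025-11-15 deadline land at roughly 72%
    # of the lifetime - inside the assumed jitter window.
    not_after = datetime(2026, 2, 24, 5, 52, 8)
    not_before = not_after - timedelta(days=365)
    print(rotation_deadline(not_before, not_after))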
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.982096 4642 server.go:997] "Starting client certificate rotation"
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.982123 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.982309 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-15 10:31:15.599335855 +0000 UTC
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.982377 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.992260 4642 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 06:47:56 crc kubenswrapper[4642]: I0128 06:47:56.993700 4642 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 06:47:56 crc kubenswrapper[4642]: E0128 06:47:56.993757 4642 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.004033 4642 log.go:25] "Validated CRI v1 runtime API"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.023998 4642 log.go:25] "Validated CRI v1 image API"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.025812 4642 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.029799 4642 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-06-44-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.029830 4642 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}]
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.042863 4642 manager.go:217] Machine: {Timestamp:2026-01-28 06:47:57.041248469 +0000 UTC m=+0.273337289 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2445406 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6d7f2c45-295c-4dcd-b97d-d5c383274b44 BootID:af9fb491-28d4-46e5-857c-727aaf9d83d0 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:d4:78 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:20:d4:78 Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:3f:69:b8 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:f4:99:f7 Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:7b:cd:9d Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:9c:2e:a7 Speed:-1 Mtu:1436} {Name:eth10 MacAddress:22:d4:c3:e9:cd:1f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5a:ea:5b:02:31:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:65536 Type:Data Level:1} {Id:10 Size:65536 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:65536 Type:Data Level:1} {Id:11 Size:65536 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:65536 Type:Data Level:1} {Id:8 Size:65536 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:65536 Type:Data Level:1} {Id:9 Size:65536 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.043038 4642 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.043120 4642 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044037 4642 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044256 4642 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044286 4642 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044459 4642 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044469 4642 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044805 4642 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.044833 4642 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.045324 4642 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.045409 4642 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.047374 4642 kubelet.go:418] "Attempting to sync node with API server"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.047395 4642 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.047418 4642 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.047429 4642 kubelet.go:324] "Adding apiserver pod source"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.047439 4642 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.050172 4642 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.050399 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused
Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.050409 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.050461 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError"
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.050482 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.050870 4642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.052396 4642 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053479 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053499 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053507 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053520 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053542 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053549 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053556 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053567 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053574 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053582 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053594 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.053601 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.054216 4642 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.054656 4642 server.go:1280] "Started kubelet"
Jan 28 06:47:57 crc systemd[1]: Started Kubernetes Kubelet.
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.057455 4642 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.057509 4642 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.058048 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.058135 4642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.058298 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:40:11.096833026 +0000 UTC
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.058350 4642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.058685 4642 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.061141 4642 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.061170 4642 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.061176 4642 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.061284 4642 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.061761 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.061810 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.061967 4642 server.go:460] "Adding debug handlers to kubelet server"
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.061965 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="200ms"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.062139 4642 factory.go:55] Registering systemd factory
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.062155 4642 factory.go:221] Registration of the systemd container factory successfully
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.063201 4642 factory.go:153] Registering CRI-O factory
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.063219 4642 factory.go:221] Registration of the crio container factory successfully
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.063273 4642 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.063298 4642 factory.go:103] Registering Raw factory
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.063314 4642 manager.go:1196] Started watching for new ooms in manager
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.064011 4642 manager.go:319] Starting recovery of all containers
Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.061970 4642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.248:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ed2404374733a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 06:47:57.054628666 +0000 UTC m=+0.286717475,LastTimestamp:2026-01-28 06:47:57.054628666 +0000 UTC m=+0.286717475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068880 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068930 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068940 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068949 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068960 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068968 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068975 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068982 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068991 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.068999 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069008 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069015 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069023 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069033 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069043 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069051 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069059 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069067 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069074 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069083 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.069091 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071647 4642 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071683 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071695 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071706 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071718 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071728 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071805 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071815 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071827 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071837 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071849 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071866 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071877 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071885 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071894 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071906 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.071915 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072301 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072353 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072368 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072391 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072404 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072452 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072489 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072501 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072517 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072529 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072558 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072570 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072582 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072598 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072609 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072634 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072650 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072667 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072683 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072694 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072708 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072719 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072732 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072742 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072752 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072764 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072775 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072789 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072804 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072813 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072826 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072835 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072852 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072862 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072870 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072883 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072895 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072909 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072921 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072931 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072946 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072955 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072965 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072978 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.072987 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073000 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073012 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073021 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073037 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073047 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073063 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073072 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073081 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073095 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073105 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073117 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073127 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073137 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073152 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073163 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073180 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073205 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073215 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073230 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073243 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073259 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073271 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073291 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073303 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073318 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073334 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073349 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073359 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073372 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073383 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073397 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073408 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073424 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073432 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073446 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073456 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073469 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073591 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073620 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073632 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073642 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073657 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073666 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073679 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073690 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073724 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073737 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073746 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073759 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073768 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073778 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073791 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073802 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073813 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073822 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073834 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073845 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073854 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073864 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073876 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073886 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073899 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073907 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073917 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073930 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073939 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb"
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073950 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073962 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073972 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.073985 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074000 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074011 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074020 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074029 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074040 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074049 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074059 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074134 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074166 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074198 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074221 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074233 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074247 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074257 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074268 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074279 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074289 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074301 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074311 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074323 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074334 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074344 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074355 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074366 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074379 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074399 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074409 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074420 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074429 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074438 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074449 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074460 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074472 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074483 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074491 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074505 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074514 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074526 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074549 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074644 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074662 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074682 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074693 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074709 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074719 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074765 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074776 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074790 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074800 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074809 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074820 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074832 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074846 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074856 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074865 4642 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074878 4642 reconstruct.go:97] "Volume reconstruction finished" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.074888 4642 reconciler.go:26] "Reconciler: start to sync state" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.080892 4642 manager.go:324] Recovery completed Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.088770 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.090062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.090097 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.090107 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.091678 4642 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.091814 4642 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.091878 4642 state_mem.go:36] "Initialized new in-memory state store" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.095641 4642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.097088 4642 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.097126 4642 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.097157 4642 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.097207 4642 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.097433 4642 policy_none.go:49] "None policy: Start" Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.097579 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.097626 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.098552 4642 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.098583 4642 state_mem.go:35] "Initializing new in-memory state store" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.138107 4642 manager.go:334] "Starting Device Plugin manager" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.138145 4642 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.138157 4642 server.go:79] "Starting device plugin registration server" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.139309 4642 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.139327 4642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.139488 4642 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.139573 4642 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.139584 4642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.145385 4642 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.198007 4642 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.198091 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: 
I0128 06:47:57.198906 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.198941 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.198952 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199065 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199321 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199377 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199608 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199639 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199646 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199741 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199909 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.199955 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200217 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200226 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200261 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200276 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200366 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200496 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200556 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200648 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200678 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200688 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200881 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200907 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.200917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201008 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201052 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201065 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201075 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201109 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201130 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201752 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201763 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201777 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201780 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201787 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201789 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201896 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.201914 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.202472 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.202499 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.202507 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.239686 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.240716 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.240747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.240759 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.240775 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.241976 4642 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.248:6443: connect: connection refused" node="crc" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.262798 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="400ms" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276795 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276851 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276917 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276954 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276972 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.276997 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277012 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277027 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277047 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277060 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277074 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277088 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277101 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277114 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.277140 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.377867 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.377928 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.377947 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.377962 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.377995 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378012 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378029 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378041 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378037 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378055 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378096 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378103 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378134 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378142 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378110 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378059 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378168 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378174 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378108 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378210 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378215 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378238 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378059 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378241 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378272 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378300 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378321 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc 
kubenswrapper[4642]: I0128 06:47:57.378310 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378394 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.378440 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.442956 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.444129 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.444170 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.444182 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.444229 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.444560 4642 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.248:6443: connect: connection refused" node="crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.536994 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.548065 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.558262 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4359c8a0737ec358ad39b8bec6f888a25510b222919bbe539ee025ecbe2f5722 WatchSource:0}: Error finding container 4359c8a0737ec358ad39b8bec6f888a25510b222919bbe539ee025ecbe2f5722: Status 404 returned error can't find the container with id 4359c8a0737ec358ad39b8bec6f888a25510b222919bbe539ee025ecbe2f5722 Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.564064 4642 util.go:30] "No sandbox for pod can be found. 
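
The reconciler_common.go / operation_generator.go pairs above show the volume manager's pattern for the static pods' host-path volumes: each volume is first verified (VerifyControllerAttachedVolume), then a MountVolume operation is started, and a separate goroutine later reports MountVolume.SetUp succeeded, which is why the started/succeeded messages interleave out of timestamp order. Below is a generic illustration of that desired-state/actual-state loop, not the kubelet's actual implementation; the volume type, the mutex/WaitGroup wiring, and the sleep standing in for mount work are all illustrative assumptions.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type volume struct{ pod, name string }

    // reconcile compares desired against actual and kicks off an async
    // "mount" for anything missing, like the started/succeeded pairs above.
    func reconcile(desired []volume, actual map[volume]bool, mu *sync.Mutex, wg *sync.WaitGroup) {
        for _, v := range desired {
            mu.Lock()
            mounted := actual[v]
            mu.Unlock()
            if mounted {
                continue
            }
            fmt.Printf("MountVolume started for %q pod %q\n", v.name, v.pod)
            wg.Add(1)
            go func(v volume) { // operations run concurrently, hence interleaved logs
                defer wg.Done()
                time.Sleep(10 * time.Millisecond) // stand-in for the real mount work
                mu.Lock()
                actual[v] = true
                mu.Unlock()
                fmt.Printf("MountVolume.SetUp succeeded for %q pod %q\n", v.name, v.pod)
            }(v)
        }
    }

    func main() {
        desired := []volume{
            {"etcd-crc", "usr-local-bin"}, // names taken from the entries above
            {"etcd-crc", "cert-dir"},
            {"kube-apiserver-crc", "audit-dir"},
        }
        actual := map[volume]bool{}
        var mu sync.Mutex
        var wg sync.WaitGroup
        reconcile(desired, actual, &mu, &wg)
        wg.Wait()
    }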
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.564538 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0ef7b7fa0d2806829b903ac8b6109663c66564f2d98dfbc89889f40f801b25f3 WatchSource:0}: Error finding container 0ef7b7fa0d2806829b903ac8b6109663c66564f2d98dfbc89889f40f801b25f3: Status 404 returned error can't find the container with id 0ef7b7fa0d2806829b903ac8b6109663c66564f2d98dfbc89889f40f801b25f3 Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.575143 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4b9d82b6c7912ee71b8ba96113f849a7820ed240cad870e14997af96818923b8 WatchSource:0}: Error finding container 4b9d82b6c7912ee71b8ba96113f849a7820ed240cad870e14997af96818923b8: Status 404 returned error can't find the container with id 4b9d82b6c7912ee71b8ba96113f849a7820ed240cad870e14997af96818923b8 Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.577774 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.581766 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.595137 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6a108f945dfb1a1905612a9789bdba046ab356a435a1926ab66c7bb5a3a97a48 WatchSource:0}: Error finding container 6a108f945dfb1a1905612a9789bdba046ab356a435a1926ab66c7bb5a3a97a48: Status 404 returned error can't find the container with id 6a108f945dfb1a1905612a9789bdba046ab356a435a1926ab66c7bb5a3a97a48 Jan 28 06:47:57 crc kubenswrapper[4642]: W0128 06:47:57.596292 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e99f2c33baa90a3fc6a1268df5bfbbcf700d2cf99106b743d82d90bbb5888bc1 WatchSource:0}: Error finding container e99f2c33baa90a3fc6a1268df5bfbbcf700d2cf99106b743d82d90bbb5888bc1: Status 404 returned error can't find the container with id e99f2c33baa90a3fc6a1268df5bfbbcf700d2cf99106b743d82d90bbb5888bc1 Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.664295 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="800ms" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.845617 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.846560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.846595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.846605 4642 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:57 crc kubenswrapper[4642]: I0128 06:47:57.846630 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 06:47:57 crc kubenswrapper[4642]: E0128 06:47:57.846957 4642 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.248:6443: connect: connection refused" node="crc" Jan 28 06:47:58 crc kubenswrapper[4642]: W0128 06:47:58.011289 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.011393 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.058501 4642 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.059602 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:16:13.691452005 +0000 UTC Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.103598 4642 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc" exitCode=0 Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.103689 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.103797 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e99f2c33baa90a3fc6a1268df5bfbbcf700d2cf99106b743d82d90bbb5888bc1"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.103901 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.105081 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.105108 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.105117 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.106329 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.106356 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6a108f945dfb1a1905612a9789bdba046ab356a435a1926ab66c7bb5a3a97a48"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.108045 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad" exitCode=0 Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.108125 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.108200 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b9d82b6c7912ee71b8ba96113f849a7820ed240cad870e14997af96818923b8"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.108320 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.109264 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.109291 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.109300 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.110181 4642 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dd416d95b0a09e8e551a5d24bf2a55ae70ef5c4cffe697f53917be7358bbebed" exitCode=0 Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.110653 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.110269 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dd416d95b0a09e8e551a5d24bf2a55ae70ef5c4cffe697f53917be7358bbebed"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.110729 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ef7b7fa0d2806829b903ac8b6109663c66564f2d98dfbc89889f40f801b25f3"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.110860 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111484 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111504 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111514 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111730 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111756 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.111764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.112251 4642 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d99a890a07886a968dd0b94da1663f3e212252cb8ef372bcb1fb13dff30f2445" exitCode=0 Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.112283 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d99a890a07886a968dd0b94da1663f3e212252cb8ef372bcb1fb13dff30f2445"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.112300 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4359c8a0737ec358ad39b8bec6f888a25510b222919bbe539ee025ecbe2f5722"} Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.112366 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.113273 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.113317 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.113326 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.465612 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="1.6s" Jan 28 06:47:58 crc kubenswrapper[4642]: W0128 06:47:58.533178 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.533267 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:47:58 crc kubenswrapper[4642]: W0128 06:47:58.555555 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.555625 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.648060 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.649042 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.649078 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.649088 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:58 crc kubenswrapper[4642]: I0128 06:47:58.649113 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.649447 4642 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.248:6443: connect: connection refused" node="crc" Jan 28 06:47:58 crc kubenswrapper[4642]: W0128 06:47:58.650983 4642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.248:6443: connect: connection refused Jan 28 06:47:58 crc kubenswrapper[4642]: E0128 06:47:58.651045 4642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.248:6443: connect: connection refused" logger="UnhandledError" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.000219 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.059862 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:46:57.282545596 +0000 UTC Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.115735 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.115782 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.115797 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.115885 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.116555 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.116585 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.116595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118055 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118068 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118086 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118137 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118660 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118685 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.118693 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.120753 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.120778 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.120788 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b"} Jan 28 06:47:59 crc 
kubenswrapper[4642]: I0128 06:47:59.120798 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.120818 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.120868 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.121381 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.121414 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.121422 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.122630 4642 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="775f9843a2dd1e7cce276625e6be7ce3b4e51bbf902f33ddd44dc506f2d09f05" exitCode=0 Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.122681 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"775f9843a2dd1e7cce276625e6be7ce3b4e51bbf902f33ddd44dc506f2d09f05"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.122824 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.123429 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.123448 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.123457 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.124278 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"00dfea44d7001a114156a7be7349cff0f3b2207a408a93ac76b0535876fb4f18"} Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.124331 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.124803 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.124826 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.124834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:47:59 crc 
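
Every kubenswrapper entry carries a klog-style header: a severity letter (I, W, or E) fused to the month and day, the wall-clock time, the logging PID, and the emitting source file and line, followed by the message. Splitting that header out makes it easy to filter, for example, only E-level entries; a sketch of such a parser (the struct and field names are my own, not kubelet types):

    // Split a klog header like "E0128 06:47:57.444560 4642 kubelet_node_status.go:99] ..."
    package main

    import (
    	"fmt"
    	"regexp"
    )

    type klogEntry struct {
    	Severity string // I, W, or E
    	Date     string // mmdd, e.g. 0128 for Jan 28
    	Time     string // hh:mm:ss.micros
    	PID      string
    	Source   string // file.go:line
    	Message  string
    }

    var hdr = regexp.MustCompile(`([IWE])(\d{4}) (\S+) (\d+) (\S+)\] (.*)`)

    func parse(line string) (klogEntry, bool) {
    	m := hdr.FindStringSubmatch(line)
    	if m == nil {
    		return klogEntry{}, false
    	}
    	return klogEntry{m[1], m[2], m[3], m[4], m[5], m[6]}, true
    }

    func main() {
    	e, _ := parse(`E0128 06:47:57.444560 4642 kubelet_node_status.go:99] "Unable to register node with API server"`)
    	fmt.Printf("%s at %s from %s: %s\n", e.Severity, e.Time, e.Source, e.Message)
    }
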
Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.921060 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:47:59 crc kubenswrapper[4642]: I0128 06:47:59.927108 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.015665 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.060743 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:54:18.589079055 +0000 UTC
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.129800 4642 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="804bb38839933015daf31dfa5893c9ceb56641ff31ad96f3696c8421e4b30ba1" exitCode=0
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.129892 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"804bb38839933015daf31dfa5893c9ceb56641ff31ad96f3696c8421e4b30ba1"}
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.129979 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.130059 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.130071 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.130080 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131575 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131603 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131648 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131671 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131674 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131682 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131632 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.131945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.249811 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.251118 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.251155 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.251166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:00 crc kubenswrapper[4642]: I0128 06:48:00.251214 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.061174 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:48:48.137030659 +0000 UTC
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.135975 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c24a6d04920c0bca6339c96dad720c43253f29d88dc8024142a54ed579c3be2"}
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136038 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136054 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136088 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136105 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136167 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136042 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24b044de7f7defa8f02a1cbf7533140e1d7db48bacf8ba8f712ec06894884d5d"}
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136237 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89aeff0a865741b52eac31ea0348bac1ad46e0ea2d02a9418bd3d54962ced4b0"}
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136262 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7f33a0fcd4bb9f2c421ecbba53f0f91739300960dc39210ccfe09a7a34354da2"}
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.136280 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4066ece05190c23d6f3616b681a6ffa95bba26a803aa7f382a268e64d428c07d"}
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137427 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137456 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137429 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137475 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137485 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137495 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137742 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137763 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.137770 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:01 crc kubenswrapper[4642]: I0128 06:48:01.862938 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 28 06:48:02 crc kubenswrapper[4642]: I0128 06:48:02.061541 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:55:28.662426815 +0000 UTC
Jan 28 06:48:02 crc kubenswrapper[4642]: I0128 06:48:02.138275 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:02 crc kubenswrapper[4642]: I0128 06:48:02.139047 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:02 crc kubenswrapper[4642]: I0128 06:48:02.139093 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:02 crc kubenswrapper[4642]: I0128 06:48:02.139103 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.061826 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:50:02.469596468 +0000 UTC
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.140318 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.141171 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.141230 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.141240 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.968903 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
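
The kubelet-serving certificate lines deserve a second look: the expiration is always 2026-02-24 05:53:03, yet the logged rotation deadline differs on every line (2025-11-19, 2025-12-26, 2025-11-22, and so on). The certificate manager recomputes a jittered deadline inside the tail of the certificate's validity so that a fleet of kubelets does not all rotate at once; each line reflects a fresh draw. A rough sketch of the idea (the 70-90% window and the one-year validity are my assumptions about the behavior, not values from this log):

    // Jittered rotation deadline: a random point late in the cert's validity.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	// Assumed window: rotate somewhere between 70% and 90% of the lifetime.
    	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
    	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
    	for i := 0; i < 3; i++ {
    		// Each draw lands on a different deadline, like the log lines above.
    		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    	}
    }
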
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.969288 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.969405 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.970572 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.970669 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:03 crc kubenswrapper[4642]: I0128 06:48:03.970729 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.062037 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:20:44.811015923 +0000 UTC Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.827743 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.828038 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.828157 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.829573 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.829619 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:04 crc kubenswrapper[4642]: I0128 06:48:04.829628 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.062162 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:56:14.333244238 +0000 UTC Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.673320 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.673444 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.673482 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.674412 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.674444 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.674452 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.911813 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-etcd/etcd-crc" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.911926 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.912666 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.912694 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:05 crc kubenswrapper[4642]: I0128 06:48:05.912703 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.063086 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:01:02.142422154 +0000 UTC Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.289934 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.290019 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.290776 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.290810 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:06 crc kubenswrapper[4642]: I0128 06:48:06.290819 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.031203 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.031577 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.032623 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.032730 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.032788 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.063907 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:46:05.557506256 +0000 UTC Jan 28 06:48:07 crc kubenswrapper[4642]: E0128 06:48:07.145441 4642 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.828421 4642 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 06:48:07 crc 
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.828691 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.846587 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.846834 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.847878 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.847909 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:07 crc kubenswrapper[4642]: I0128 06:48:07.847917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:08 crc kubenswrapper[4642]: I0128 06:48:08.064442 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:17:00.758094065 +0000 UTC
Jan 28 06:48:09 crc kubenswrapper[4642]: E0128 06:48:09.001840 4642 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.059757 4642 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.064931 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:54:25.334945341 +0000 UTC
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.073478 4642 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.073623 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.077127 4642 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 06:48:09 crc kubenswrapper[4642]: I0128 06:48:09.077257 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 06:48:10 crc kubenswrapper[4642]: I0128 06:48:10.021340 4642 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]log ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]etcd ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/priority-and-fairness-filter ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-apiextensions-informers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-apiextensions-controllers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/crd-informer-synced ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-system-namespaces-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 28 06:48:10 crc kubenswrapper[4642]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/bootstrap-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-registration-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-discovery-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]autoregister-completion ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-openapi-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 28 06:48:10 crc kubenswrapper[4642]: livez check failed
Jan 28 06:48:10 crc kubenswrapper[4642]: I0128 06:48:10.021974 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 06:48:10 crc kubenswrapper[4642]: I0128 06:48:10.065019 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:47:55.831270141 +0000 UTC
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.065668 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:51:08.134000756 +0000 UTC
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.883761 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.883916 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.884879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.884923 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.884933 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:11 crc kubenswrapper[4642]: I0128 06:48:11.892865 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 06:48:12 crc kubenswrapper[4642]: I0128 06:48:12.066307 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:47:41.695414596 +0000 UTC
Jan 28 06:48:12 crc kubenswrapper[4642]: I0128 06:48:12.155802 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 06:48:12 crc kubenswrapper[4642]: I0128 06:48:12.156509 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:12 crc kubenswrapper[4642]: I0128 06:48:12.156550 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:12 crc kubenswrapper[4642]: I0128 06:48:12.156562 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:13 crc kubenswrapper[4642]: I0128 06:48:13.067010 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:37:29.066078799 +0000 UTC
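
The 500 from /livez above comes with a per-check breakdown: [+] lines passed, [-] lines failed with the reason withheld; here the two stragglers are poststarthook/rbac/bootstrap-roles and poststarthook/scheduling/bootstrap-system-priority-classes. Pulling the failures out of a captured body is a short loop; a sketch:

    // Report only the failing ([-]) checks from a /livez verbose body.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	checks := []string{ // a few lines from the probe body above
    		"[+]ping ok",
    		"[+]etcd ok",
    		"[-]poststarthook/rbac/bootstrap-roles failed: reason withheld",
    		"[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld",
    		"livez check failed",
    	}
    	for _, line := range checks {
    		if strings.HasPrefix(line, "[-]") {
    			fmt.Println("failing:", strings.TrimPrefix(line, "[-]"))
    		}
    	}
    }
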
Jan 28 06:48:13 crc kubenswrapper[4642]: I0128 06:48:13.074498 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 06:48:13 crc kubenswrapper[4642]: I0128 06:48:13.084417 4642 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.067914 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:00:03.974191697 +0000 UTC
Jan 28 06:48:14 crc kubenswrapper[4642]: E0128 06:48:14.075625 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.077332 4642 trace.go:236] Trace[2010769556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 06:47:59.650) (total time: 14427ms):
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[2010769556]: ---"Objects listed" error: 14427ms (06:48:14.077)
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[2010769556]: [14.427108479s] [14.427108479s] END
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.077375 4642 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.077343 4642 trace.go:236] Trace[992831268]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 06:48:01.061) (total time: 13016ms):
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[992831268]: ---"Objects listed" error: 13016ms (06:48:14.077)
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[992831268]: [13.016058362s] [13.016058362s] END
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.077397 4642 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.078241 4642 trace.go:236] Trace[1752329406]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 06:48:01.010) (total time: 13067ms):
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[1752329406]: ---"Objects listed" error: 13067ms (06:48:14.078)
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[1752329406]: [13.067536974s] [13.067536974s] END
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.078261 4642 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.078715 4642 trace.go:236] Trace[661051247]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 06:48:01.349) (total time: 12729ms):
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[661051247]: ---"Objects listed" error: 12729ms (06:48:14.078)
Jan 28 06:48:14 crc kubenswrapper[4642]: Trace[661051247]: [12.729601616s] [12.729601616s] END
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.078761 4642 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.079052 4642 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
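
The Trace lines above quantify how long each informer's initial LIST was blocked: roughly 14.4s, 13.0s, 13.1s, and 12.7s, all ending at 06:48:14, the moment the API server finally answered. The bracketed figures are valid Go time.Duration strings and parse directly; a sketch:

    // Parse the "[14.427108479s]" durations from client-go trace output.
    package main

    import (
    	"fmt"
    	"regexp"
    	"time"
    )

    var durRe = regexp.MustCompile(`\[(\d+\.\d+s)\]`)

    func main() {
    	lines := []string{ // two END lines from the traces above
    		"Trace[2010769556]: [14.427108479s] [14.427108479s] END",
    		"Trace[661051247]: [12.729601616s] [12.729601616s] END",
    	}
    	var total time.Duration
    	for _, l := range lines {
    		if m := durRe.FindStringSubmatch(l); m != nil {
    			if d, err := time.ParseDuration(m[1]); err == nil {
    				total += d
    				fmt.Println("list blocked for", d)
    			}
    		}
    	}
    	fmt.Println("combined:", total)
    }
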
reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 06:48:14 crc kubenswrapper[4642]: E0128 06:48:14.079665 4642 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.831917 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.834452 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:14 crc kubenswrapper[4642]: I0128 06:48:14.835496 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.020780 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.021414 4642 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.021482 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.024364 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.058848 4642 apiserver.go:52] "Watching apiserver" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.060556 4642 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.060847 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061252 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061277 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061348 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061382 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061419 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.061438 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061464 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.061595 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.061878 4642 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.061884 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.063010 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.063435 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.063801 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.063912 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.064039 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.064101 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.064320 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.064663 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.065416 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.068347 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:44:22.37710659 +0000 UTC Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.081908 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.085773 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.085800 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086131 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086157 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086424 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086448 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086463 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086480 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086067 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086390 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086392 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086718 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086790 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086814 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086816 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.086868 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087061 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087096 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087111 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087125 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087140 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 06:48:15 crc 
kubenswrapper[4642]: I0128 06:48:15.087206 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087224 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087445 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087502 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087642 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087652 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.087710 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088102 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088216 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.088330 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:15.588310227 +0000 UTC m=+18.820399036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088370 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088387 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088528 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088616 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088693 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088971 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.088718 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089020 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089027 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089062 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089078 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089094 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089112 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089125 4642 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089139 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089151 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089164 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089179 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089215 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089228 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089242 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089257 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089271 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089284 4642 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089297 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089306 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089309 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089336 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089367 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089384 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089398 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089412 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089426 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089421 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089442 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089497 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089513 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089530 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089545 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089560 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089573 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089588 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089603 4642 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089618 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089633 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089642 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089645 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089698 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089714 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089730 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089746 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089760 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089765 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089774 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089805 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089821 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089837 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089851 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089827 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089864 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089987 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090010 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090027 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090045 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090063 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090081 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090097 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090111 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090125 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090141 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090155 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090169 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090201 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090217 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090232 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090246 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090262 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090276 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090290 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090303 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090318 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090337 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090365 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090380 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090396 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090413 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090429 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090443 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090458 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090475 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090489 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090504 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090518 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090537 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090551 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090566 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090581 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090624 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090640 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090656 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090672 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090690 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090706 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090727 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090742 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090758 4642 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090776 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090790 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090805 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090820 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090834 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090849 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090866 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090900 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090915 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: 
I0128 06:48:15.090930 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090945 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090961 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090977 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090995 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091011 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091027 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091042 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091056 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091070 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091085 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091120 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091135 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091162 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091176 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091209 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091224 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091239 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091253 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091268 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091288 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091303 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091320 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091335 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091350 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091375 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091391 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091406 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091421 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091437 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091452 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091467 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091481 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091496 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091516 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091530 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091544 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091560 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091576 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091593 4642 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091608 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091624 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091639 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091656 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091672 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091687 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091702 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091717 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091732 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091747 4642 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091762 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091778 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091793 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091809 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091824 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091840 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091857 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091874 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091888 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091903 4642 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091920 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091936 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091952 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091969 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091986 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092002 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092018 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092033 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092048 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " 
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092068 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092085 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092104 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092150 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092168 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092201 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092218 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092241 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092258 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092276 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092293 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092309 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092334 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092351 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092380 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092395 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092412 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092445 4642 reconciler_common.go:293] "Volume detached for 
volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092456 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092467 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092477 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092488 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092498 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092507 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092516 4642 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092526 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092535 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092544 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092553 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092561 4642 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092570 4642 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092579 4642 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092588 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092596 4642 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092606 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092616 4642 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092625 4642 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092633 4642 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093134 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.089850 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090027 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090090 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090399 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090665 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090833 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.090902 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091307 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091606 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091621 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091662 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091690 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091733 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091845 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091897 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091937 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.091882 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092041 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092070 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092249 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092263 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092390 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092680 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.092716 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093231 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093280 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093303 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093389 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093632 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094300 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093651 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093748 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094315 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093753 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093782 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.093988 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094008 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094021 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094063 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094084 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094085 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094460 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094610 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094682 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094713 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.094813 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095347 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095434 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095760 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095781 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095952 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.096265 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.096455 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.095714 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.096781 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.096970 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097108 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097102 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097084 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097249 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097286 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097400 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097710 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097518 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097657 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097870 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097947 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.097956 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098071 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098135 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098122 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098267 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098252 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098280 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098384 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098395 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098519 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098476 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098495 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098754 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098896 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.098977 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099067 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099236 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099297 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099527 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099550 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099616 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099719 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.099820 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.100009 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.100441 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.100490 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.100543 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.100623 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:15.600595923 +0000 UTC m=+18.832684732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.100669 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.100872 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.100896 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.100927 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:15.6009137 +0000 UTC m=+18.833002509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101101 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101173 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101206 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101266 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101587 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101692 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101744 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101674 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101934 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.102004 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.102298 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.101115 4642 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.102981 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.103826 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.106957 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.107107 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108528 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108590 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108593 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108613 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108687 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.108874 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109087 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109176 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109295 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109454 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109265 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109304 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109509 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109598 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109753 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109762 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109893 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109930 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
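The status_manager failure above is worth unpacking: the kubelet PATCHes the pod's status subresource with a strategic merge patch (the $setElementOrder directive preserves condition ordering), and it is the API server, while admitting that patch, that must call the pod.network-node-identity.openshift.io webhook; since nothing is listening on 127.0.0.1:9743 yet, the server returns an Internal error and the kubelet retries on a later sync. A rough sketch of the request shape only, using plain net/http instead of the real client-go path, with a placeholder URL and no auth:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Sketch of the client-side shape of the status update above. The API
// server, not the kubelet, consults admission webhooks, so a webhook that
// refuses connections surfaces as an Internal error on this request.
func patchPodStatus(apiServer, ns, pod string, patch []byte) error {
	url := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s/status", apiServer, ns, pod)
	req, err := http.NewRequest(http.MethodPatch, url, bytes.NewReader(patch))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/strategic-merge-patch+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // network-level failure reaching the API server itself
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		// e.g. 500 Internal error when the server cannot call a webhook
		return fmt.Errorf("status patch rejected: %s", resp.Status)
	}
	return nil
}

func main() {
	_ = patchPodStatus("https://127.0.0.1:6443", "openshift-network-operator",
		"network-operator-58b4c7f79c-55gtf", []byte(`{"status":{}}`))
}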
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.110122 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.110216 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.110249 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.110718 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.109830 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.110889 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111109 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111181 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111225 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111523 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111576 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.111624 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.112210 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.112484 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.113661 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.113979 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.114349 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.119245 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.119624 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120069 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120330 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120408 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120741 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120826 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.120874 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.121167 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.121335 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.121470 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.121715 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122082 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122336 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122432 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122736 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122789 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122846 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.122968 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.123048 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.123568 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.123970 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.124212 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.124230 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.124243 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.124383 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:15.624329279 +0000 UTC m=+18.856418088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.125475 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.125534 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.126134 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.126143 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.127806 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.128422 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.128585 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.130075 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.131872 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132160 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132416 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132505 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132543 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132652 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.132826 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.134584 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.135495 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.135697 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.136141 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.137454 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.138105 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.139499 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.139527 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.139807 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.139821 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.139859 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:15.639843875 +0000 UTC m=+18.871932684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.140383 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.141586 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.142333 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.142416 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.142547 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.143000 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.144087 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.144886 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.145606 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.146946 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.147528 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.149875 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.150775 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.152323 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.153653 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.155041 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.155137 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.155840 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.156934 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.157617 4642 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.157805 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.157863 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.159550 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.160410 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.160837 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.162327 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.162725 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.162957 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.163438 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.163945 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.164877 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.165569 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.166405 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.167000 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.167457 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096" exitCode=255 Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.167961 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.168528 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.169262 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.169790 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.170562 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.171205 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.171943 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.172370 4642 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.172462 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.172539 4642 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.172826 4642 scope.go:117] "RemoveContainer" containerID="70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.172997 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.173240 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.173757 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.174269 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.175027 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.175459 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096"} Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.180310 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.188327 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193209 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193240 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193321 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193399 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193531 4642 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193549 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193577 4642 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193595 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193606 4642 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193616 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193626 4642 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193635 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193642 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193652 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193662 4642 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193671 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193679 4642 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193689 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193709 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193720 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193729 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193737 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193746 4642 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193754 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193762 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193770 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193777 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193786 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193795 4642 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 06:48:15 crc 
kubenswrapper[4642]: I0128 06:48:15.193804 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193812 4642 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193821 4642 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193830 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193837 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193845 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193853 4642 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193860 4642 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193868 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193876 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193885 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193891 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193899 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 
06:48:15.193907 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193915 4642 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193923 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193930 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193937 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193946 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193954 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193980 4642 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193989 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.193998 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194006 4642 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194015 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194022 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 
crc kubenswrapper[4642]: I0128 06:48:15.194030 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194039 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194047 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194075 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194093 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194103 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194112 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194121 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194130 4642 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194138 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194146 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194162 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194176 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc 
kubenswrapper[4642]: I0128 06:48:15.194202 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194211 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194219 4642 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194227 4642 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194234 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194243 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194251 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194233 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194258 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194330 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194340 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194350 4642 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194372 4642 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194379 4642 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194387 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194394 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194402 4642 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 
06:48:15.194409 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194417 4642 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194425 4642 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194432 4642 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194442 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194449 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194456 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194463 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194470 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194477 4642 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194485 4642 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194492 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194499 4642 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194506 4642 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194514 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194521 4642 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194530 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194539 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194547 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194554 4642 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194561 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194568 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194576 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194583 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194592 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194606 4642 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194614 4642 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194621 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194630 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194638 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194645 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194653 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194660 4642 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194667 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194675 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194682 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194696 4642 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194706 4642 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194713 4642 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194721 4642 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194729 4642 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194737 4642 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194744 4642 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194752 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194759 4642 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194766 4642 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194774 4642 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194781 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194788 4642 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194796 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194804 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194811 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194819 4642 reconciler_common.go:293] "Volume 
detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194827 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194833 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194841 4642 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194847 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194855 4642 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194862 4642 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194870 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194878 4642 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194885 4642 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194891 4642 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194899 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194907 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194915 4642 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194922 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194928 4642 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194938 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194945 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194952 4642 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194961 4642 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194968 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194975 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194983 4642 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194990 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.194997 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195004 4642 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195010 4642 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195018 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195025 4642 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195031 4642 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195038 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195045 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195053 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195061 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195069 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195077 4642 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195084 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195091 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195098 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195106 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.195113 4642 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.200454 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.207032 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28
T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.213297 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.219640 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.225882 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.232127 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.238760 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.245276 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.251509 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.371603 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.379304 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 06:48:15 crc kubenswrapper[4642]: W0128 06:48:15.382374 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0f89e0973936c8cb6fbfa1ea4acdb8d8f70273459ba15702a472d2ed51d81af6 WatchSource:0}: Error finding container 0f89e0973936c8cb6fbfa1ea4acdb8d8f70273459ba15702a472d2ed51d81af6: Status 404 returned error can't find the container with id 0f89e0973936c8cb6fbfa1ea4acdb8d8f70273459ba15702a472d2ed51d81af6 Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.383493 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 06:48:15 crc kubenswrapper[4642]: W0128 06:48:15.392898 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-71ae5508668bc2a99161379995cd7682b39d7e9f86ed9796ddd0720d61a40aa3 WatchSource:0}: Error finding container 71ae5508668bc2a99161379995cd7682b39d7e9f86ed9796ddd0720d61a40aa3: Status 404 returned error can't find the container with id 71ae5508668bc2a99161379995cd7682b39d7e9f86ed9796ddd0720d61a40aa3 Jan 28 06:48:15 crc kubenswrapper[4642]: W0128 06:48:15.395978 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9226ebfb7cc308f3177c8779fc4b2105a7bc254b399eca171ae012c465fe7cdb WatchSource:0}: Error finding container 9226ebfb7cc308f3177c8779fc4b2105a7bc254b399eca171ae012c465fe7cdb: Status 404 returned error can't find the container with id 9226ebfb7cc308f3177c8779fc4b2105a7bc254b399eca171ae012c465fe7cdb Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.601761 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.601850 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.601884 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.601985 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.602018 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:16.601995999 +0000 UTC m=+19.834084818 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.602066 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:16.602037537 +0000 UTC m=+19.834126356 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.601985 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.602119 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:16.602108269 +0000 UTC m=+19.834197078 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.702897 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:15 crc kubenswrapper[4642]: I0128 06:48:15.702943 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703131 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703151 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703148 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703212 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703228 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703165 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703290 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:16.703269711 +0000 UTC m=+19.935358530 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:15 crc kubenswrapper[4642]: E0128 06:48:15.703339 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:16.703320767 +0000 UTC m=+19.935409576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.068740 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:40:46.600824895 +0000 UTC Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.172034 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"71ae5508668bc2a99161379995cd7682b39d7e9f86ed9796ddd0720d61a40aa3"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.173658 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.173727 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0f89e0973936c8cb6fbfa1ea4acdb8d8f70273459ba15702a472d2ed51d81af6"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.176020 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.177843 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.178221 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.179575 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd"} Jan 28 06:48:16 crc 
kubenswrapper[4642]: I0128 06:48:16.179629 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.179643 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9226ebfb7cc308f3177c8779fc4b2105a7bc254b399eca171ae012c465fe7cdb"} Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.186129 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.194570 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.202473 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.212465 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.227615 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.255258 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28
T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.263973 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.273958 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.291094 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.302713 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.313804 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.322512 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.332506 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.365393 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.379133 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.388833 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:16Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.609784 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.609937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.609966 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.610010 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:18.609971098 +0000 UTC m=+21.842059906 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.610112 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.610150 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.610242 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:18.610181042 +0000 UTC m=+21.842269851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.610266 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:18.61025997 +0000 UTC m=+21.842348780 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.711084 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:16 crc kubenswrapper[4642]: I0128 06:48:16.711121 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711261 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711278 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711291 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711344 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:18.711326434 +0000 UTC m=+21.943415243 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711341 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711378 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711391 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:16 crc kubenswrapper[4642]: E0128 06:48:16.711475 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:18.711457992 +0000 UTC m=+21.943546801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.069131 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:27:14.585576763 +0000 UTC Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.098316 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.098348 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.098481 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.098504 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.098567 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.098661 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.101691 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.102216 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.103306 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.103820 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.104773 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.114141 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.123151 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.139803 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.161315 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.174164 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.184223 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.194081 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.204931 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.279911 4642 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.281393 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.281429 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.281438 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.281503 4642 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.290941 4642 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.291148 4642 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.292098 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.292132 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.292143 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.292160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.292171 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.306761 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.309240 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.309272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.309283 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.309300 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.309309 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.319428 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.322148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.322175 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.322202 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.322219 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.322230 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.333820 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.336654 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.336688 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.336700 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.336712 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.336722 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.344535 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.346915 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.346943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.346954 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.346970 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.346980 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.356031 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:17 crc kubenswrapper[4642]: E0128 06:48:17.356132 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.357047 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.357074 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.357082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.357096 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.357105 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.462166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.462248 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.462259 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.462280 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.462292 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.564254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.564284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.564294 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.564305 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.564314 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.666548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.666577 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.666588 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.666600 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.666609 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.769178 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.769259 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.769272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.769295 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.769310 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.872424 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.872466 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.872477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.872492 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.872502 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.974247 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.974285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.974297 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.974312 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:17 crc kubenswrapper[4642]: I0128 06:48:17.974324 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:17Z","lastTransitionTime":"2026-01-28T06:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.069256 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:56:45.394176735 +0000 UTC Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.075881 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.075929 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.075940 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.075957 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.075969 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.179405 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.179545 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.179555 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.179754 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.179767 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.186550 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474"} Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.201051 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.211279 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.220595 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.229971 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.239414 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.247739 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.257528 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.265998 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.281467 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.281500 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.281510 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.281525 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.281534 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.383133 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.383167 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.383177 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.383213 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.383225 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.485598 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.485661 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.485673 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.485689 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.485700 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.587843 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.587884 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.587893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.587911 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.587921 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
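Every "Failed to update status for pod" record above shares a single root cause, stated at the tail of each error: the serving certificate for the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28 (a CRC VM resumed long after its certificates lapsed). A quick way to confirm what certificate a local webhook endpoint is actually presenting is to dial it and print the validity window; a minimal sketch in Go, assuming the endpoint address from the log (127.0.0.1:9743) is reachable from the node:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Address taken from the webhook errors in this log; assumed reachable
        // from the node. InsecureSkipVerify is deliberate: we want to read the
        // certificate even though verification would fail.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
        fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
    }

For the log above, the printed notAfter would sit well before the node's current time, matching the "certificate has expired or is not yet valid" failures.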
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.626113 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.626176 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.626249 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.626300 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.626309 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.626331 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:22.626284549 +0000 UTC m=+25.858373368 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.626407 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:22.626394966 +0000 UTC m=+25.858483785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
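The nestedpendingoperations.go:348 records above show how the volume manager handles failures: each failed mount/unmount is parked with a "No retries permitted until ..." deadline (durationBeforeRetry 4s here) rather than retried on every reconciler pass, i.e. per-operation exponential backoff. A minimal sketch of that pattern with illustrative durations only (the initial and maximum delays below are assumptions, not kubelet constants, and setUpVolume is a hypothetical stand-in for the real SetUp call):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // setUpVolume fails until the referenced object becomes available on the
    // third attempt, mimicking the "not registered" errors above. Hypothetical.
    func setUpVolume(attempt int) error {
        if attempt < 3 {
            return errors.New(`object "openshift-network-console"/"networking-console-plugin" not registered`)
        }
        return nil
    }

    func main() {
        delay := 500 * time.Millisecond // assumed initial backoff
        const maxDelay = 2 * time.Minute // assumed cap

        for attempt := 1; ; attempt++ {
            err := setUpVolume(attempt)
            if err == nil {
                fmt.Printf("volume set up after %d attempts\n", attempt)
                return
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The backoff keeps a node with many broken volumes from hammering the reconciler loop while still converging quickly once the missing object appears.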
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.626440 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:22.626429572 +0000 UTC m=+25.858518391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.690173 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.690241 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.690251 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.690264 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.690274 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.728319 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.728556 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.728591 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.728609 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.731102 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.731242 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.731264 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.731275 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.731327 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:22.731307029 +0000 UTC m=+25.963395838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:18 crc kubenswrapper[4642]: E0128 06:48:18.731365 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:22.731348688 +0000 UTC m=+25.963437496 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.792487 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.792521 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.792530 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.792546 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.792556 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.894740 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.894804 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.894817 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.894834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.894845 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
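The kube-api-access-* volumes failing above are projected volumes: each combines several independently prepared sources (a serviceaccount token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps), and projected.go:194 reports all per-source failures as one bracketed aggregate. The kubelet uses its own aggregate error type for this; a minimal stdlib sketch of the same collect-then-join pattern, using the exact object names from the records above (errors.Join is the Go 1.20+ analogue, not the kubelet's actual implementation):

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        // Each projected source is prepared independently; failures are
        // collected rather than aborting on the first one.
        errs := []error{
            errors.New(`object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered`),
            errors.New(`object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered`),
        }
        err := errors.Join(errs...)
        fmt.Println("Error preparing data for projected volume kube-api-access-cqllr:")
        fmt.Println(err)
    }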
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.997424 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.997466 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.997477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.997492 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:18 crc kubenswrapper[4642]: I0128 06:48:18.997504 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:18Z","lastTransitionTime":"2026-01-28T06:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.070014 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:20:12.170129866 +0000 UTC
Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.097853 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.097883 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:19 crc kubenswrapper[4642]: E0128 06:48:19.098236 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:48:19 crc kubenswrapper[4642]: E0128 06:48:19.098345 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.097950 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:19 crc kubenswrapper[4642]: E0128 06:48:19.098497 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
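The util.go:30 and pod_workers.go:1301 records above show the gating in action: pods that need a brand-new sandbox are skipped outright while the runtime reports NetworkReady=false, and the condition the kubelet keeps publishing is the exact JSON printed by setters.go:603. That payload can be decoded as-is; a small sketch using one condition copied verbatim from this log (the struct fields mirror the v1.NodeCondition wire format):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the fields of the condition object printed by
    // setters.go:603 in the records above.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Copied verbatim from one of the log records above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s)\n%s\n", c.Type, c.Status, c.Reason, c.Message)
    }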
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.099643 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.099687 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.099699 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.099718 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.099729 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.202160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.202247 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.202260 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.202280 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.202293 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.303985 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.304034 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.304045 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.304061 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.304079 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.406227 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.406281 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.406292 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.406311 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.406324 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.508760 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.508832 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.508845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.508865 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.508882 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.611542 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.611594 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.611605 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.611624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.611636 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.713686 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.713734 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.713748 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.713768 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.713781 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.816341 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.816400 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.816409 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.816433 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.816444 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.918937 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.918987 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.918998 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.919016 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:19 crc kubenswrapper[4642]: I0128 06:48:19.919030 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:19Z","lastTransitionTime":"2026-01-28T06:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.021834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.021902 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.021914 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.021942 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.021953 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.070370 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:17:52.717381752 +0000 UTC Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.123865 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.123973 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.123986 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.124007 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.124019 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.226837 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.226885 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.226895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.226917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.226928 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.329411 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.329720 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.329730 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.329746 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.329756 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.431698 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.431738 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.431746 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.431765 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.431774 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.453556 4642 csr.go:261] certificate signing request csr-hjbdj is approved, waiting to be issued
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.461067 4642 csr.go:257] certificate signing request csr-hjbdj is issued
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.534317 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.534363 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.534372 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.534387 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.534398 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.637784 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.637824 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.637833 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.637849 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.637863 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
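The csr.go records show the kubelet-serving CSR (csr-hjbdj) being approved and issued, which is why certificate_manager.go:356 can report a fresh expiry of 2026-02-24. Note that the reported rotation deadline differs on every pass (2025-11-09, then 2025-11-29, then 2026-01-09 below): the deadline is re-jittered each time it is computed. A sketch of that idea, picking a random point between 70% and 90% of the certificate's lifetime; the 70-90% window follows the commonly described client-go certificate manager behaviour and is an assumption here, as is the one-year lifetime:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random point between 70% and 90% of the
    // certificate's lifetime. The window is an assumption modelled on
    // client-go's certificate manager, not a value taken from this log.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
        return notBefore.Add(time.Duration(float64(lifetime) * frac))
    }

    func main() {
        // Expiry from the log ("2026-02-24 05:53:03 +0000 UTC"); the one-year
        // lifetime, and hence notBefore, is assumed for illustration.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0)
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }

Re-running the computation yields a different deadline each time, matching the drifting deadlines in the records above; the jitter spreads renewal load so a fleet of kubelets does not rotate simultaneously.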
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.739933 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.739990 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.740009 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.740031 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.740044 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.842679 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.842732 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.842741 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.842766 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.842779 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.945323 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.945378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.945388 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.945407 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:20 crc kubenswrapper[4642]: I0128 06:48:20.945416 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:20Z","lastTransitionTime":"2026-01-28T06:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.047814 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.047848 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.047856 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.047876 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.047886 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.071117 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:11:05.60411189 +0000 UTC Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.097528 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.097566 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.097615 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:21 crc kubenswrapper[4642]: E0128 06:48:21.097671 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:21 crc kubenswrapper[4642]: E0128 06:48:21.097783 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:21 crc kubenswrapper[4642]: E0128 06:48:21.098225 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.137994 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tzmpk"] Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.138279 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hdsmf"] Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.138436 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zwkn6"] Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.138853 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.139176 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.139543 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.144267 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.144597 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-28n48"] Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.144761 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.144997 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145300 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145403 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145563 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145624 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145572 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145745 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145849 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145882 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145850 4642 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.145952 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.146305 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.146854 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.146994 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.149170 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.149213 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.149255 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.149271 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.149281 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.157484 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.170531 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.187303 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.198981 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.208694 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.217617 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.227542 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.236947 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.245445 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250223 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/338ae955-434d-40bd-8519-580badf3e175-mcd-auth-proxy-config\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc 
kubenswrapper[4642]: I0128 06:48:21.250321 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rfx6\" (UniqueName: \"kubernetes.io/projected/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-kube-api-access-9rfx6\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250507 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-conf-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250618 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-hosts-file\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250668 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-cnibin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250690 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-cni-binary-copy\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250707 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf4x\" (UniqueName: \"kubernetes.io/projected/338ae955-434d-40bd-8519-580badf3e175-kube-api-access-fdf4x\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250730 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-socket-dir-parent\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250743 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-hostroot\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250758 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250771 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-os-release\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250785 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/338ae955-434d-40bd-8519-580badf3e175-proxy-tls\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250806 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8zj\" (UniqueName: \"kubernetes.io/projected/3d569b7c-8a0e-4074-b61f-4139413b9849-kube-api-access-kp8zj\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250823 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-system-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250836 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-netns\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250853 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-bin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250867 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-multus\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250881 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250936 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-k8s-cni-cncf-io\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.250986 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-daemon-config\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251007 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/338ae955-434d-40bd-8519-580badf3e175-rootfs\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251031 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-system-cni-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251057 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw2j9\" (UniqueName: \"kubernetes.io/projected/5e8cd657-e170-4331-9f82-7b84d122c8e4-kube-api-access-hw2j9\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251077 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-etc-kubernetes\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251119 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-os-release\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251155 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251173 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251234 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-kubelet\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251297 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251308 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251313 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-multus-certs\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251329 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251368 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-cnibin\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6"
Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.251342 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.253278 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.260947 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.268771 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.278998 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.291839 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.302003 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.310129 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.320332 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.328146 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.336722 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc 
kubenswrapper[4642]: I0128 06:48:21.343532 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.351151 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352513 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/338ae955-434d-40bd-8519-580badf3e175-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352558 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rfx6\" (UniqueName: \"kubernetes.io/projected/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-kube-api-access-9rfx6\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352580 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-conf-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352613 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-cnibin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352632 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-cni-binary-copy\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352652 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdf4x\" (UniqueName: \"kubernetes.io/projected/338ae955-434d-40bd-8519-580badf3e175-kube-api-access-fdf4x\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352671 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-hosts-file\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352687 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-conf-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352693 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-hostroot\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352732 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-hostroot\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352757 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-socket-dir-parent\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352780 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352797 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-os-release\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352813 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/338ae955-434d-40bd-8519-580badf3e175-proxy-tls\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352891 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp8zj\" (UniqueName: \"kubernetes.io/projected/3d569b7c-8a0e-4074-b61f-4139413b9849-kube-api-access-kp8zj\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352926 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-bin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352944 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-system-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352960 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-netns\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352977 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-multus\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352992 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353006 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-k8s-cni-cncf-io\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353025 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-daemon-config\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353027 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-hosts-file\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353044 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/338ae955-434d-40bd-8519-580badf3e175-rootfs\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353061 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-etc-kubernetes\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353063 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-bin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.352861 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-socket-dir-parent\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353078 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-system-cni-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353094 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw2j9\" (UniqueName: \"kubernetes.io/projected/5e8cd657-e170-4331-9f82-7b84d122c8e4-kube-api-access-hw2j9\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: 
\"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353113 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-os-release\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353130 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353147 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353162 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-kubelet\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353177 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-multus-certs\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353215 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-cnibin\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353257 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-cnibin\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353060 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-cnibin\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353261 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-os-release\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " 
pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353307 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-multus-certs\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353323 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-etc-kubernetes\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353374 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-os-release\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353385 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-kubelet\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353410 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-netns\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353442 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/338ae955-434d-40bd-8519-580badf3e175-rootfs\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353459 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-system-cni-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353499 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-run-k8s-cni-cncf-io\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353472 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-system-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353502 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-cni-binary-copy\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353530 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-host-var-lib-cni-multus\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353557 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-cni-dir\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353788 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e8cd657-e170-4331-9f82-7b84d122c8e4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353870 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-binary-copy\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.353917 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/338ae955-434d-40bd-8519-580badf3e175-mcd-auth-proxy-config\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354028 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3d569b7c-8a0e-4074-b61f-4139413b9849-multus-daemon-config\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354063 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5e8cd657-e170-4331-9f82-7b84d122c8e4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354757 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354789 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354815 4642 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.354825 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.360975 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.361881 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/338ae955-434d-40bd-8519-580badf3e175-proxy-tls\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.365979 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdf4x\" (UniqueName: \"kubernetes.io/projected/338ae955-434d-40bd-8519-580badf3e175-kube-api-access-fdf4x\") pod \"machine-config-daemon-hdsmf\" (UID: \"338ae955-434d-40bd-8519-580badf3e175\") " pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.367109 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rfx6\" (UniqueName: \"kubernetes.io/projected/e61b4710-6f7c-4ab1-b7bb-50d445aeda93-kube-api-access-9rfx6\") pod \"node-resolver-tzmpk\" (UID: \"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\") " pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.367318 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp8zj\" (UniqueName: \"kubernetes.io/projected/3d569b7c-8a0e-4074-b61f-4139413b9849-kube-api-access-kp8zj\") pod \"multus-28n48\" (UID: \"3d569b7c-8a0e-4074-b61f-4139413b9849\") " pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.368986 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw2j9\" (UniqueName: \"kubernetes.io/projected/5e8cd657-e170-4331-9f82-7b84d122c8e4-kube-api-access-hw2j9\") pod \"multus-additional-cni-plugins-zwkn6\" (UID: \"5e8cd657-e170-4331-9f82-7b84d122c8e4\") " pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.371470 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.450928 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.455178 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tzmpk" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.456833 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.456874 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.456887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.456908 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.456919 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.462203 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 06:43:20 +0000 UTC, rotation deadline is 2026-11-03 15:09:06.11879273 +0000 UTC Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.462286 4642 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6704h20m44.656509464s for next certificate rotation Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.463591 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.468448 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-28n48" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.522937 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdwx"] Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.523829 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.525247 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.525609 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.525665 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.525814 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.525923 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.527367 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.527642 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.536673 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.548391 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.561934 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: 
I0128 06:48:21.562756 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.562787 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.562797 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.562813 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.562824 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.572920 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.583681 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.592478 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.604277 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.614735 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.623914 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.635158 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.644518 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.655641 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656049 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656083 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656098 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656116 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656142 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656240 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656307 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656343 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656371 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656388 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656417 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656446 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656489 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656572 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 
28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656606 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656728 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656757 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656786 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656812 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.656834 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9vq\" (UniqueName: \"kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.664586 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.664613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.664627 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.664644 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.664656 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.665805 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:21Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757580 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757610 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757630 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 
crc kubenswrapper[4642]: I0128 06:48:21.757646 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757662 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757679 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757699 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9vq\" (UniqueName: \"kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757727 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757750 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757745 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757786 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757765 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc 
kubenswrapper[4642]: I0128 06:48:21.757861 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757931 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757949 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757969 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757990 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758005 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758023 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.757753 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758044 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758067 4642 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758113 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758110 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758141 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758169 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758169 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758146 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758165 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758265 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758286 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd\") pod \"ovnkube-node-7fdwx\" (UID: 
\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758300 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758307 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758344 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758373 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758394 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.758539 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.759095 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.762233 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.766634 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.766892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.766901 4642 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.766915 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.766927 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.771515 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9vq\" (UniqueName: \"kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq\") pod \"ovnkube-node-7fdwx\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.836629 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:21 crc kubenswrapper[4642]: W0128 06:48:21.846480 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5d2a3f_25d8_4051_8000_30ec01a14eb0.slice/crio-eb82ce31ee302e958f27b0d545954ccb2660743db4bf70f2312741fa99aeb02e WatchSource:0}: Error finding container eb82ce31ee302e958f27b0d545954ccb2660743db4bf70f2312741fa99aeb02e: Status 404 returned error can't find the container with id eb82ce31ee302e958f27b0d545954ccb2660743db4bf70f2312741fa99aeb02e Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.869642 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.869696 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.869707 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.869727 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.869750 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.972331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.972382 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.972392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.972410 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:21 crc kubenswrapper[4642]: I0128 06:48:21.972422 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:21Z","lastTransitionTime":"2026-01-28T06:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.071249 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:13:13.564578367 +0000 UTC Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.078808 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.078854 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.078864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.078879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.078892 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.181425 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.181455 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.181464 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.181479 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.181489 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.197787 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" exitCode=0 Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.197884 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.197941 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"eb82ce31ee302e958f27b0d545954ccb2660743db4bf70f2312741fa99aeb02e"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.199024 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerStarted","Data":"04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.199061 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerStarted","Data":"9c0896fab501c5c994f7082d188d50d053c50b824f0a68c21970f114c2ce6c2a"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.201765 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.201833 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.201849 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"19c31552adee6ebe03abfc06b8a20e5ec9b9bf9008f35f292428b5c4e917c56c"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.204007 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tzmpk" event={"ID":"e61b4710-6f7c-4ab1-b7bb-50d445aeda93","Type":"ContainerStarted","Data":"f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.204052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tzmpk" event={"ID":"e61b4710-6f7c-4ab1-b7bb-50d445aeda93","Type":"ContainerStarted","Data":"1b890fac9dbfc566fb41459ab170219156b372390e8e2db083127ff83c543c30"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.207385 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807" exitCode=0 Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.207432 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.207469 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerStarted","Data":"332e1b75e44f06af45c7cdf5d3c3db44cfae9ef0c9f7ad8586bb8ba948cdc33f"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.214719 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.225136 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.235147 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.244767 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.254419 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.269658 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.278834 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.285864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.285892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.285906 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.285928 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.285941 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.286368 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.329543 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.358456 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.380536 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z 
is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.388842 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.388863 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.388872 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.388885 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.388897 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.392446 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.403118 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.411705 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.422318 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.435458 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z 
is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.446994 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.460652 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.476458 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc 
kubenswrapper[4642]: I0128 06:48:22.490775 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.490835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.490845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.490866 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.490878 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.491733 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.507004 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.519529 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.531449 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.545609 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.559075 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.569392 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:22Z is
after 2025-08-24T17:21:41Z"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.594415 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.594460 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.594472 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.594500 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.594519 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.666834 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.666927 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.666978 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:30.666955443 +0000 UTC m=+33.899044253 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.667052 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.667082 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.667117 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.667167 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:30.667154718 +0000 UTC m=+33.899243527 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.667200 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:30.667180968 +0000 UTC m=+33.899269776 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.696407 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.696443 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.696473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.696489 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.696498 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.767468 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.767519 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767603 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767631 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767645 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767665 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767684 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767689 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:30.767678201 +0000 UTC m=+33.999767010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767698 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:22 crc kubenswrapper[4642]: E0128 06:48:22.767739 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:30.767727243 +0000 UTC m=+33.999816052 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.799915 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.799970 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.799983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.800003 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.800020 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.901907 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.901936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.901945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.901959 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:22 crc kubenswrapper[4642]: I0128 06:48:22.901970 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:22Z","lastTransitionTime":"2026-01-28T06:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.004468 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.004520 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.004532 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.004553 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.004566 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.071677 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:06:34.447275957 +0000 UTC
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.098171 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.098253 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.098171 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:23 crc kubenswrapper[4642]: E0128 06:48:23.098343 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:48:23 crc kubenswrapper[4642]: E0128 06:48:23.098435 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:48:23 crc kubenswrapper[4642]: E0128 06:48:23.098536 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.106626 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.106659 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.106669 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.106684 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.106695 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.210443 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.210502 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.210518 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.210535 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.210545 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215157 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215207 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215221 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215232 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215242 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.215251 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.217957 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af" exitCode=0
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.218000 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af"}
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.230773 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.242830 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.265456 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z 
is after 2025-08-24T17:21:41Z"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.281887 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.291741 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status:
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.303393 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.313787 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.313956 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.313989 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.314014 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.314031 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.314153 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.324645 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.335021 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.346833 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.357397 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.367921 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.377405 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:23Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.415778 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.415803 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.415812 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.415831 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.415843 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.518731 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.518759 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.518768 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.518788 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.518799 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.621176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.621248 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.621257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.621278 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.621291 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.723380 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.723419 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.723429 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.723445 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.723456 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.827019 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.827069 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.827080 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.827100 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.827116 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.929710 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.929752 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.929764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.930160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:23 crc kubenswrapper[4642]: I0128 06:48:23.930214 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:23Z","lastTransitionTime":"2026-01-28T06:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.032457 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.032500 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.032510 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.032525 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.032537 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.072824 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:43:21.747593369 +0000 UTC Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.135010 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.135051 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.135062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.135082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.135094 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.226147 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0" exitCode=0 Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.226232 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.237704 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.237749 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.237761 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.237784 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.237797 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.240569 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.252643 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.264573 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.277577 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.288606 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.298166 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.310973 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.318755 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.328975 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is 
after 2025-08-24T17:21:41Z"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.337592 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.340438 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.340473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.340484 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.340506 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.340519 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.347620 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.361329 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.370729 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.427449 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wcwkw"] Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.427975 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wcwkw"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.431500 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.431553 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.431602 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.431603 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.440161 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.443076 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.443120 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.443132 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.443152 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.443163 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.456287 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z 
is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.467212 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.477403 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.488595 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.499268 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.507423 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.519168 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.527310 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.536299 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545122 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545177 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545212 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545225 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.545490 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.552930 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.561120 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.570084 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:24Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.585537 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57nl\" (UniqueName: \"kubernetes.io/projected/0008ccce-c71a-484f-9df7-02d5d92e8b02-kube-api-access-s57nl\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.585581 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0008ccce-c71a-484f-9df7-02d5d92e8b02-host\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.585634 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0008ccce-c71a-484f-9df7-02d5d92e8b02-serviceca\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.647892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc 
kubenswrapper[4642]: I0128 06:48:24.647933 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.647944 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.647966 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.647979 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.686592 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0008ccce-c71a-484f-9df7-02d5d92e8b02-serviceca\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.686710 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s57nl\" (UniqueName: \"kubernetes.io/projected/0008ccce-c71a-484f-9df7-02d5d92e8b02-kube-api-access-s57nl\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.686762 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0008ccce-c71a-484f-9df7-02d5d92e8b02-host\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.686863 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0008ccce-c71a-484f-9df7-02d5d92e8b02-host\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.687642 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0008ccce-c71a-484f-9df7-02d5d92e8b02-serviceca\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.705251 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s57nl\" (UniqueName: \"kubernetes.io/projected/0008ccce-c71a-484f-9df7-02d5d92e8b02-kube-api-access-s57nl\") pod \"node-ca-wcwkw\" (UID: \"0008ccce-c71a-484f-9df7-02d5d92e8b02\") " pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.740605 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wcwkw" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.750340 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.750378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.750391 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.750408 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.750420 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: W0128 06:48:24.755132 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0008ccce_c71a_484f_9df7_02d5d92e8b02.slice/crio-9f03ee50ee241aa5a95abae3a6bb30f0ec908f89ee709e35389c6d9324a6d495 WatchSource:0}: Error finding container 9f03ee50ee241aa5a95abae3a6bb30f0ec908f89ee709e35389c6d9324a6d495: Status 404 returned error can't find the container with id 9f03ee50ee241aa5a95abae3a6bb30f0ec908f89ee709e35389c6d9324a6d495 Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.853102 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.853392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.853404 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.853421 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.853431 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.955803 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.955841 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.955853 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.955873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:24 crc kubenswrapper[4642]: I0128 06:48:24.955885 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:24Z","lastTransitionTime":"2026-01-28T06:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.058656 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.058716 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.058728 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.058752 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.058777 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.073996 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:29:18.120702069 +0000 UTC Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.097653 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.097732 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.097662 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:25 crc kubenswrapper[4642]: E0128 06:48:25.097805 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:25 crc kubenswrapper[4642]: E0128 06:48:25.097880 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:25 crc kubenswrapper[4642]: E0128 06:48:25.098098 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.161426 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.161462 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.161473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.161486 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.161496 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.231312 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wcwkw" event={"ID":"0008ccce-c71a-484f-9df7-02d5d92e8b02","Type":"ContainerStarted","Data":"04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.231363 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wcwkw" event={"ID":"0008ccce-c71a-484f-9df7-02d5d92e8b02","Type":"ContainerStarted","Data":"9f03ee50ee241aa5a95abae3a6bb30f0ec908f89ee709e35389c6d9324a6d495"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.236002 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.239090 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677" exitCode=0 Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.239140 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.243567 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.253731 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.264307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.264364 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.264374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.264391 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.264405 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.271953 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.285606 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.294691 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.304301 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.313902 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.324331 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.340074 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.353321 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.362286 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.366541 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.366598 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.366610 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.366625 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.366637 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.373341 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.382382 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.389818 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.398919 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.408998 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.418107 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.425763 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.439979 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.457843 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.468664 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.468709 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.468719 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.468738 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.468747 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.490851 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.503529 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.518921 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z 
is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.529505 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.539523 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.550223 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.559572 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.567393 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:25Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.571077 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.571104 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.571112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.571128 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.571137 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.673298 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.673342 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.673360 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.673376 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.673385 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.776241 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.776283 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.776293 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.776308 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.776319 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.879260 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.879332 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.879368 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.879397 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.879417 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.982122 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.982164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.982178 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.982208 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:25 crc kubenswrapper[4642]: I0128 06:48:25.982218 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:25Z","lastTransitionTime":"2026-01-28T06:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.075131 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:13:38.11339762 +0000 UTC
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.085222 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.085263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.085275 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.085298 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.085309 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.188040 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.188411 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.188421 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.188440 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.188454 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.247463 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d" exitCode=0 Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.247542 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.260988 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.270206 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.285209 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.292009 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.292038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.292047 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.292063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.292074 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.294858 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.302129 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.310574 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.318807 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.326948 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.336336 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.343969 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.353328 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.361866 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.371234 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.385103 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:26Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.394725 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.394755 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.394782 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.394799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.394808 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.498846 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.498891 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.498904 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.498923 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.498936 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.600885 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.600926 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.600936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.600952 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.600963 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.702789 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.702827 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.702835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.702848 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.702856 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.805753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.805826 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.805842 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.805868 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.805880 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.908896 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.908953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.908964 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.908986 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.909000 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:26Z","lastTransitionTime":"2026-01-28T06:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:26 crc kubenswrapper[4642]: I0128 06:48:26.982900 4642 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.014710 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.014994 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.015004 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.015019 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.015029 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.035105 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.046037 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.056147 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.071511 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.075814 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:38:40.386260499 +0000 UTC Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.087580 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.097898 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.098068 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.098266 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.098290 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.098447 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.098529 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.098593 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.108054 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.116967 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.116990 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.116999 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.117012 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.117021 4642 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.117714 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.128323 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.139164 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.153257 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 
2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.161291 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.168655 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.176407 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.184312 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.193310 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.202688 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.215952 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.219259 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.219284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.219295 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.219311 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.219321 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.226773 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.234992 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.244764 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.260314 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.263021 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.263215 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.263455 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.265447 4642 generic.go:334] "Generic (PLEG): container finished" podID="5e8cd657-e170-4331-9f82-7b84d122c8e4" containerID="e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f" exitCode=0 Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.265477 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerDied","Data":"e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.267797 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5
7nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.277272 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.286697 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.286986 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.288446 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.297644 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.305787 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.313481 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321498 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321527 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321549 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321564 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321575 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.321773 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 
06:48:27.332096 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.342721 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.355043 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.365232 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.373313 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.381777 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.391618 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.401317 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.409868 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.419684 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.424124 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.424262 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.424363 4642 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.424440 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.424501 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.428243 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.437747 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.448340 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.464591 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.526801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.526843 4642 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.526851 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.526871 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.526884 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.530725 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.530753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.530764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.530780 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.530791 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.540250 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.543111 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.543141 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.543151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.543164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.543171 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.551864 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.554693 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.554751 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.554764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.554785 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.554798 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.563882 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.566728 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.566764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.566774 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.566795 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.566808 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.580456 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.582863 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.582948 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.583021 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.583087 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.583144 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.591060 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:27Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:27 crc kubenswrapper[4642]: E0128 06:48:27.591163 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.628668 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.628717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.628727 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.628742 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.628755 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.730923 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.730957 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.730969 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.730989 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.731000 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.833247 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.833280 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.833289 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.833302 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.833311 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.935819 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.935851 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.935860 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.935870 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:27 crc kubenswrapper[4642]: I0128 06:48:27.935878 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:27Z","lastTransitionTime":"2026-01-28T06:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.038467 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.038496 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.038504 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.038514 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.038522 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
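The Ready condition stays False because the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. The rough Go sketch below shows that readiness test as commonly implemented by CNI config loaders; the accepted file extensions are an assumption about typical loader behavior, not lifted from kubelet source.

// cni_ready.go: rough sketch of the "is there a CNI config yet?" check
// behind the NetworkReady=false message repeated above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the log message.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("network not ready: %v\n", err)
		return
	}
	for _, e := range entries {
		// Assumed extension list; common CNI loaders accept these.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("network ready: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("network not ready: no CNI configuration file in", confDir)
}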
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.076233 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:23:07.495038921 +0000 UTC
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.140919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.140973 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.140984 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.141001 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.141012 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.243409 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.243444 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.243453 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.243471 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.243484 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
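The certificate_manager line above shows a kubelet-serving certificate that expires on 2026-02-24 but whose rotation deadline (2026-01-12) has already passed relative to the node clock, so rotation is due immediately. The manager picks that deadline at a jittered point inside the certificate's validity window; the sketch below assumes client-go's roughly 70-90% jitter fraction, which is not stated in this log, and the notBefore date is illustrative.

// rotation_deadline.go: sketch of computing a jittered rotation deadline
// like the one reported by certificate_manager.go above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline returns a random point in roughly the last third of the
// certificate's validity window (assumed 70-90% jitter, per client-go's
// certificate manager; an assumption, not confirmed by this log).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC) // illustrative issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n", notAfter, deadline)
	if time.Now().After(deadline) {
		fmt.Println("rotation is already due")
	}
}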
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.271811 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.272343 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" event={"ID":"5e8cd657-e170-4331-9f82-7b84d122c8e4","Type":"ContainerStarted","Data":"a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260"}
Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.286519 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.296613 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.306847 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.315767 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.323159 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.333099 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.341747 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.345490 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.345523 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.345533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.345548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.345557 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.351035 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.359217 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.367831 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.376677 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.386712 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.396832 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.410714 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:28Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.448082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.448132 4642 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.448143 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.448163 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.448175 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.550516 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.550558 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.550567 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.550587 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.550598 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.652821 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.653359 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.653434 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.653522 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.653585 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.755428 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.755555 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.755615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.755674 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.755737 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.858206 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.858723 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.858786 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.858858 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.858927 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.961926 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.961984 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.961996 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.962016 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:28 crc kubenswrapper[4642]: I0128 06:48:28.962027 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:28Z","lastTransitionTime":"2026-01-28T06:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.064900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.064954 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.064968 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.064987 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.064999 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.077029 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:43:59.248395803 +0000 UTC Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.097869 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.097953 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.097967 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:29 crc kubenswrapper[4642]: E0128 06:48:29.098091 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:29 crc kubenswrapper[4642]: E0128 06:48:29.098232 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:29 crc kubenswrapper[4642]: E0128 06:48:29.098335 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.167749 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.167807 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.167820 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.167840 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.167855 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.271063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.271105 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.271116 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.271133 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.271145 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.276409 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/0.log" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.279292 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805" exitCode=1 Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.279333 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.280218 4642 scope.go:117] "RemoveContainer" containerID="a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.291869 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.307582 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"\\\\nI0128 06:48:29.046308 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 06:48:29.046326 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 06:48:29.046338 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 06:48:29.046390 5954 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 06:48:29.046400 5954 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 06:48:29.046432 5954 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 06:48:29.047876 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 06:48:29.047876 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 06:48:29.047894 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 06:48:29.047891 5954 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 06:48:29.047905 5954 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 06:48:29.047919 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 06:48:29.047931 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 06:48:29.048091 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 06:48:29.048176 5954 factory.go:656] Stopping watch factory\\\\nI0128 06:48:29.048216 5954 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.317080 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.326738 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.338243 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.347169 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.355327 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.364534 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.372505 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.372535 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.372546 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.372560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.372569 4642 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.373789 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.382633 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.391987 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.399361 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.409157 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.418207 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:29Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.475847 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.475899 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.475908 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.475929 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.475943 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.578035 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.578072 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.578083 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.578101 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.578113 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.680571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.680615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.680624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.680641 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.680650 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.783474 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.783521 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.783531 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.783551 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.783561 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.886666 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.886709 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.886720 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.886737 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.886748 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.989651 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.989699 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.989720 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.989738 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:29 crc kubenswrapper[4642]: I0128 06:48:29.989752 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:29Z","lastTransitionTime":"2026-01-28T06:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.077703 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:00:50.878907389 +0000 UTC Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.092177 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.092228 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.092240 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.092254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.092262 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.194871 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.194925 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.194935 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.194955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.194968 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.285238 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/1.log" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.285832 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/0.log" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.288816 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854" exitCode=1 Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.288861 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.288928 4642 scope.go:117] "RemoveContainer" containerID="a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.289545 4642 scope.go:117] "RemoveContainer" containerID="b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854" Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.289716 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.297091 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.297126 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.297135 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.297152 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.297164 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.301054 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.312083 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.320651 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.333284 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7910d2cb2c6e6a4f89a6ff081bc0090d19ac293e50096919a54fb112bce3805\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"\\\\nI0128 06:48:29.046308 5954 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 06:48:29.046326 5954 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 06:48:29.046338 5954 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 06:48:29.046390 5954 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 06:48:29.046400 5954 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 06:48:29.046432 5954 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 06:48:29.047876 5954 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 06:48:29.047876 5954 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 06:48:29.047894 5954 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 06:48:29.047891 5954 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 06:48:29.047905 5954 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 06:48:29.047919 5954 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 06:48:29.047931 5954 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 06:48:29.048091 5954 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 06:48:29.048176 5954 factory.go:656] Stopping watch factory\\\\nI0128 06:48:29.048216 5954 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.343802 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster
-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.352763 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.362881 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.371923 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.379100 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.387319 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.395207 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.398811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.398840 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.398853 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.398867 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.398876 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.405569 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.414296 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.422999 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:30Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.501890 4642 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.501944 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.501955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.501981 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.501992 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.604394 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.604457 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.604468 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.604494 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.604505 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.706621 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.706670 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.706683 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.706704 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.706718 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.750360 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.750505 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.750608 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:48:46.750563023 +0000 UTC m=+49.982651833 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.750641 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.750715 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:46.750697065 +0000 UTC m=+49.982785873 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.750770 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.750816 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.750847 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:46.750838631 +0000 UTC m=+49.982927440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.809290 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.809326 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.809336 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.809366 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.809379 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
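The nestedpendingoperations records above show the kubelet gating retries of failed volume operations behind an exponential backoff; in this capture durationBeforeRetry has reached 16s. An illustrative Go sketch of that doubling-with-cap pattern (the initial delay and cap are assumptions for illustration, not the kubelet's actual constants):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed starting delay and upper bound; each failure doubles the wait.
        wait := 500 * time.Millisecond
        maxWait := 2 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, wait)
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
    }

Under these assumed parameters the sixth consecutive failure waits 16s, matching the value logged above; the backoff resets once an operation succeeds.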
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.852165 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.852269 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852402 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852437 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852455 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852468 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852489 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852504 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852525 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:46.852502116 +0000 UTC m=+50.084590925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:30 crc kubenswrapper[4642]: E0128 06:48:30.852568 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:46.852552892 +0000 UTC m=+50.084641702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.912126 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.912209 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.912222 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.912242 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:30 crc kubenswrapper[4642]: I0128 06:48:30.912253 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:30Z","lastTransitionTime":"2026-01-28T06:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.015276 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.015342 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.015361 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.015378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.015389 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
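The projected-volume failures above report objects "not registered": that message comes from the kubelet's local object cache, which has not yet synced the referenced ConfigMaps, and is distinct from the objects being absent from the API server. A reasonable first debugging step is to confirm the objects do exist server-side; a client-go sketch (the kubeconfig path is an assumption for this node):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // ConfigMaps named in the projected-volume errors above.
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("configmap %s: err=%v\n", name, err)
        }
    }

If both Gets succeed while the kubelet still logs "not registered", the gap is in the kubelet's informer sync (consistent with the node just having restarted), not in the API objects themselves.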
Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.078529 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:13:29.257162704 +0000 UTC Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.098238 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.098376 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:31 crc kubenswrapper[4642]: E0128 06:48:31.098464 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.098273 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:31 crc kubenswrapper[4642]: E0128 06:48:31.098622 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:31 crc kubenswrapper[4642]: E0128 06:48:31.098707 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.117924 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.117975 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.117985 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.118003 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.118014 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.220835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.220885 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.220899 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.220915 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.220925 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.293900 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/1.log" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.298238 4642 scope.go:117] "RemoveContainer" containerID="b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854" Jan 28 06:48:31 crc kubenswrapper[4642]: E0128 06:48:31.298793 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.315002 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323721 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323754 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323780 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323790 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.323974 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.331535 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.340807 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.348952 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.357471 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.365664 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.374047 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.380833 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.388088 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.395955 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.410433 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9
e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.420152 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.425590 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.425631 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.425643 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.425661 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.425672 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.431247 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:31Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.528368 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.528420 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.528431 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.528449 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.528460 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.631419 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.631452 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.631461 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.631476 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.631489 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.734924 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.734984 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.734996 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.735016 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.735027 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.837910 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.837976 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.837986 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.838016 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.838030 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.940856 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.941241 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.941253 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.941272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:31 crc kubenswrapper[4642]: I0128 06:48:31.941286 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:31Z","lastTransitionTime":"2026-01-28T06:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.044043 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.044089 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.044101 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.044119 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.044134 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.079023 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:56:21.019529995 +0000 UTC Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.147214 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.147263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.147273 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.147293 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.147309 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.249482 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.249527 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.249539 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.249559 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.249573 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.351998 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.352051 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.352062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.352082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.352094 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.454966 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.455030 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.455042 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.455068 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.455088 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.557341 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.557412 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.557423 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.557444 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.557458 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.660393 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.660440 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.660452 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.660472 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.660485 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.762893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.762936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.762947 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.762966 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.762980 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.865945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.866002 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.866015 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.866039 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.866055 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.968586 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.968621 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.968631 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.968647 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:32 crc kubenswrapper[4642]: I0128 06:48:32.968661 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:32Z","lastTransitionTime":"2026-01-28T06:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.070827 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.070884 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.070896 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.070917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.070930 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.079254 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:38:10.614171084 +0000 UTC Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.097576 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.097642 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:33 crc kubenswrapper[4642]: E0128 06:48:33.097717 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:33 crc kubenswrapper[4642]: E0128 06:48:33.097838 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.098233 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:33 crc kubenswrapper[4642]: E0128 06:48:33.098394 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.172640 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.172674 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.172686 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.172703 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.172715 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.275895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.276337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.276490 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.276573 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.276636 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.378818 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.378869 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.378882 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.378905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.378919 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.481005 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.481447 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.481523 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.481622 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.481692 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.498976 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4"] Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.499866 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.501870 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.501874 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.509366 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 
2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.519670 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.526993 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.536389 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.544845 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.553451 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 
2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.560220 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.567244 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.576151 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584136 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584161 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584171 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584111 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584210 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.584405 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.592165 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.599262 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.608883 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.617837 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.630468 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:33Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.679770 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjkkw\" (UniqueName: \"kubernetes.io/projected/c649375a-21a3-43f5-bd77-fbc87de527fa-kube-api-access-bjkkw\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.679809 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.679837 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c649375a-21a3-43f5-bd77-fbc87de527fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.679865 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.686315 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.686375 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.686389 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.686402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.686414 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781090 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjkkw\" (UniqueName: \"kubernetes.io/projected/c649375a-21a3-43f5-bd77-fbc87de527fa-kube-api-access-bjkkw\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781132 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781164 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c649375a-21a3-43f5-bd77-fbc87de527fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781232 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781932 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.781984 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c649375a-21a3-43f5-bd77-fbc87de527fa-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.788072 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c649375a-21a3-43f5-bd77-fbc87de527fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.789226 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.789265 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.789279 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.789299 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.789310 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.794383 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjkkw\" (UniqueName: \"kubernetes.io/projected/c649375a-21a3-43f5-bd77-fbc87de527fa-kube-api-access-bjkkw\") pod \"ovnkube-control-plane-749d76644c-f7js4\" (UID: \"c649375a-21a3-43f5-bd77-fbc87de527fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.809309 4642 util.go:30] "No sandbox for pod can be found. 
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.809309 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4"
Jan 28 06:48:33 crc kubenswrapper[4642]: W0128 06:48:33.820834 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc649375a_21a3_43f5_bd77_fbc87de527fa.slice/crio-b2b9aa337f53afc6498ca49c326e029fc9eed8d128e5f8d4e5055f0e5de1ed85 WatchSource:0}: Error finding container b2b9aa337f53afc6498ca49c326e029fc9eed8d128e5f8d4e5055f0e5de1ed85: Status 404 returned error can't find the container with id b2b9aa337f53afc6498ca49c326e029fc9eed8d128e5f8d4e5055f0e5de1ed85
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.891533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.891564 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.891572 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.891607 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.891616 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.993414 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.993451 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.993463 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.993477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
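[Annotation] The node keeps flipping to NotReady with one recurring cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime reports NetworkReady only once a usable CNI conf file exists in its conf directory. A minimal, hypothetical Go sketch of that kind of presence check; the directory path comes from the message above, the accepted extensions are an assumption mirroring common CNI loaders, and this is not CRI-O's or the kubelet's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfPresent reports whether dir holds at least one CNI config file,
// approximating the NetworkReady check behind the KubeletNotReady records.
func cniConfPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // unreadable dir counts as "network plugin not ready"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log message
	if !cniConfPresent(dir) {
		fmt.Println("container runtime network not ready: NetworkReady=false")
	}
}

In this log the directory stays empty because ovnkube-controller is crash-looping, so the same NotReady condition is re-recorded on every sync.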
Jan 28 06:48:33 crc kubenswrapper[4642]: I0128 06:48:33.993486 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:33Z","lastTransitionTime":"2026-01-28T06:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.079910 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:14:38.023133968 +0000 UTC
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.095982 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.096009 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.096019 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.096031 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.096040 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.197582 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.197611 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.197622 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.197637 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
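[Annotation] The certificate_manager record above shows a rotation deadline (2026-01-16) that is already behind the node clock (2026-01-28), so the kubelet will attempt to rotate its serving certificate immediately. client-go's certificate manager is commonly described as picking the deadline at a jittered fraction of the certificate's lifetime, roughly 70-90%, so that nodes do not all rotate at once; the 0.7/0.9 bounds in this sketch are that commonly cited range, assumed rather than read from this log, and the issue time is back-computed from the expiry:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point in [70%, 90%) of the certificate's
// validity window, in the spirit of client-go's certificate manager.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("rotate now") // the situation recorded above
	}
}

With a one-year lifetime the jittered window spans roughly November 2025 to late January 2026, which is consistent with the 2026-01-16 deadline in the record (about 89% of the lifetime).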
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.197648 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.300557 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.300596 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.300609 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.300628 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.300639 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.309652 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" event={"ID":"c649375a-21a3-43f5-bd77-fbc87de527fa","Type":"ContainerStarted","Data":"4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.309714 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" event={"ID":"c649375a-21a3-43f5-bd77-fbc87de527fa","Type":"ContainerStarted","Data":"56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.309729 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" event={"ID":"c649375a-21a3-43f5-bd77-fbc87de527fa","Type":"ContainerStarted","Data":"b2b9aa337f53afc6498ca49c326e029fc9eed8d128e5f8d4e5055f0e5de1ed85"}
Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.320318 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.332905 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.342444 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/r
un/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.350647 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.360849 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.370174 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.379141 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.388181 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.395603 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.403280 4642 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.403335 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.403361 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.403381 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.403394 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.404210 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.413296 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc 
kubenswrapper[4642]: I0128 06:48:34.421511 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.430967 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.444410 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.454123 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.505572 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.505615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.505627 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.505646 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.505658 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.560792 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bpz6r"] Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.561462 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: E0128 06:48:34.561529 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.573851 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c79
0e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.584299 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.595920 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.605683 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.607875 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.607913 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.607925 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.607941 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.607953 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.615304 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.623360 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.632810 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.642143 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.650215 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.657359 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.664763 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.672929 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.680647 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.688849 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.690096 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dl8x\" (UniqueName: \"kubernetes.io/projected/e7ad39da-99cf-4851-be79-a7d38df54055-kube-api-access-7dl8x\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.690143 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.697899 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.709711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.709750 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.709759 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.709781 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.709791 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.710472 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:34Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.791015 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dl8x\" (UniqueName: \"kubernetes.io/projected/e7ad39da-99cf-4851-be79-a7d38df54055-kube-api-access-7dl8x\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.791074 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: E0128 06:48:34.791263 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:34 crc kubenswrapper[4642]: E0128 06:48:34.791373 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:35.291318966 +0000 UTC m=+38.523407776 (durationBeforeRetry 500ms). 
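The nestedpendingoperations entry above, whose "Error:" detail continues just below, shows the kubelet's per-volume retry backoff: after a failed MountVolume.SetUp it forbids retries until a deadline ("No retries permitted until ... (durationBeforeRetry 500ms)"), and repeated failures lengthen that delay. What follows is a minimal Go sketch of a doubling-with-cap backoff that reproduces this message shape; the 500ms seed matches the log line, but the 2x factor, the 2-minute cap, and the retryBackoff type itself are illustrative assumptions, not values or names taken from the kubelet source.

// backoff_sketch.go - a minimal sketch of the doubling retry backoff behind
// "No retries permitted until <t> (durationBeforeRetry <d>)" messages.
// Initial delay matches the log; factor and cap are assumed for illustration.
package main

import (
	"fmt"
	"time"
)

type retryBackoff struct {
	initial   time.Duration // delay applied after the first failure
	max       time.Duration // ceiling on the delay
	current   time.Duration // delay that the most recent failure imposed
	notBefore time.Time     // no retries permitted until this instant
}

// fail records a failure at time now and schedules the earliest next attempt.
func (b *retryBackoff) fail(now time.Time) {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	b.notBefore = now.Add(b.current)
}

// mayRetry reports whether another attempt is permitted at time now.
func (b *retryBackoff) mayRetry(now time.Time) bool {
	return !now.Before(b.notBefore)
}

func main() {
	b := &retryBackoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	now := time.Now()
	for i := 1; i <= 5; i++ {
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i, b.notBefore.Format("15:04:05.000"), b.current)
		// Advance the clock to the earliest permitted retry and fail again.
		now = b.notBefore
		fmt.Printf("  may retry at %s? %v\n", now.Format("15:04:05.000"), b.mayRetry(now))
	}
}

Under these assumed constants the sequence of delays is 500ms, 1s, 2s, 4s, 8s, ..., capped at 2m, which matches the 500ms first step visible in the entry above; the exact growth schedule in the real kubelet may differ.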
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.807892 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dl8x\" (UniqueName: \"kubernetes.io/projected/e7ad39da-99cf-4851-be79-a7d38df54055-kube-api-access-7dl8x\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.811695 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.811793 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.811867 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.811945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.812021 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.914292 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.914445 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.914510 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.914582 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:34 crc kubenswrapper[4642]: I0128 06:48:34.914653 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:34Z","lastTransitionTime":"2026-01-28T06:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
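The NodeNotReady condition above is driven by an empty CNI configuration directory. A hedged sketch of the underlying check, assuming only that the runtime looks for *.conf/*.conflist/*.json files in the directory named in the message (this is not CRI-O's exact implementation):

```go
// Sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": look for any recognized config file extension.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // false until the network provider writes its config
}
```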
Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.017492 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.017542 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.017552 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.017569 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.017584 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.080432 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:35:12.116624231 +0000 UTC Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.098080 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.098155 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.098080 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:35 crc kubenswrapper[4642]: E0128 06:48:35.098231 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:35 crc kubenswrapper[4642]: E0128 06:48:35.098330 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:35 crc kubenswrapper[4642]: E0128 06:48:35.098387 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.119057 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.119095 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.119105 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.119121 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.119133 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.221523 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.221571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.221613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.221631 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.221643 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.296158 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:35 crc kubenswrapper[4642]: E0128 06:48:35.296380 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:35 crc kubenswrapper[4642]: E0128 06:48:35.296468 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:36.296448183 +0000 UTC m=+39.528536992 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.323744 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.323778 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.323791 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.323811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.323826 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.425801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.425833 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.425844 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.425857 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.425866 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.527711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.527743 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.527752 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.527766 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.527776 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.629836 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.629887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.629900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.629919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.629935 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.732176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.732239 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.732250 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.732265 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.732284 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
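Each record above carries the standard klog header (severity, MMDD date, timestamp, PID, source file:line). A small sketch for splitting such lines into fields, useful when filtering the repeated setters.go:603 entries; the regex is an assumption that matches the lines shown here, not an official parser:

```go
// Hedged sketch of parsing a klog header line such as
// "I0128 06:48:35.527776 4642 setters.go:603] ..." into its fields.
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `I0128 06:48:35.527776 4642 setters.go:603] "Node became not ready" node="crc"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```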
Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.834416 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.834476 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.834488 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.834508 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.834522 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.937064 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.937119 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.937130 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.937151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:35 crc kubenswrapper[4642]: I0128 06:48:35.937162 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:35Z","lastTransitionTime":"2026-01-28T06:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.039830 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.039891 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.039903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.039917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.039934 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
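The condition payload logged by setters.go:603 has a fixed shape. A sketch that reproduces it with a local struct mirroring the JSON keys in the log (rather than importing k8s.io/api), to make the field layout explicit:

```go
// Sketch reconstructing the Ready=False condition object from this log; the
// struct fields mirror the JSON keys shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-28T06:48:35Z",
		LastTransitionTime: "2026-01-28T06:48:35Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false ...",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b))
}
```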
Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.081306 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:00:40.044046685 +0000 UTC Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.097886 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:36 crc kubenswrapper[4642]: E0128 06:48:36.098079 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.142733 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.142792 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.142806 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.142829 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.142842 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.245732 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.245792 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.245802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.245815 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.245825 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
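The certificate_manager.go:356 lines in this log print a different rotation deadline on every pass (2026-01-15, 2025-11-30, 2025-12-02, ...). That is expected: client-go's certificate manager jitters the deadline to roughly 70-90% of the certificate's lifetime and recomputes it each time. A sketch under that assumption; the NotBefore below is assumed, only the expiry comes from the log:

```go
// Sketch of a jittered rotation deadline: pick a point roughly 70-90% through
// the certificate's lifetime, so the logged deadline differs on each heartbeat.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```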
Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.305252 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:36 crc kubenswrapper[4642]: E0128 06:48:36.305415 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:36 crc kubenswrapper[4642]: E0128 06:48:36.305497 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:38.305476122 +0000 UTC m=+41.537564941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.348083 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.348135 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.348149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.348172 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.348202 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.450560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.450613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.450625 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.450647 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.450659 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.552811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.552864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.552874 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.552895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.552908 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.654470 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.654509 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.654518 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.654533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.654544 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
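Every failed status patch in this log dies on the same call: Post "https://127.0.0.1:9743/pod?timeout=10s" to the network-node-identity webhook. A hedged sketch of such a call with the 10s timeout and an explicit trust bundle (the CA path is hypothetical); with an expired serving certificate it fails exactly as logged:

```go
// Sketch of an HTTPS POST to the webhook endpoint with a 10s timeout. An
// expired serving certificate fails TLS verification as seen in this log.
package main

import (
	"bytes"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/path/to/webhook-ca.crt") // hypothetical CA bundle path
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   10 * time.Second, // mirrors ?timeout=10s
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", bytes.NewReader([]byte(`{}`)))
	if err != nil {
		fmt.Println("webhook call failed:", err) // e.g. x509: certificate has expired
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```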
Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.756923 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.756968 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.756978 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.756994 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.757002 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.860025 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.860082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.860094 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.860108 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.860122 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.962638 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.962700 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.962711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.962730 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:36 crc kubenswrapper[4642]: I0128 06:48:36.962747 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:36Z","lastTransitionTime":"2026-01-28T06:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
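The recurring x509 error ("current time 2026-01-28T06:48:3xZ is after 2025-08-24T17:21:41Z") is a plain validity-window failure. A sketch of the same check against a certificate on disk (path hypothetical):

```go
// Sketch of the validity check that rejects the webhook certificate: compare
// the clock to the certificate's NotBefore/NotAfter window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/path/to/serving-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid until", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired; NotAfter =", cert.NotAfter) // the case in this log
	default:
		fmt.Println("certificate is currently valid")
	}
}
```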
Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.064804 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.064846 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.064856 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.064875 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.064885 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.081985 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:49:34.651995226 +0000 UTC Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.097568 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.097628 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.097709 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.097785 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.097855 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.097960 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.111852 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.122797 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.137257 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.146974 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.157213 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.165598 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.166861 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.166905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.166917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.166936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.166951 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.175669 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.183788 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.193810 4642 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.202546 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.211709 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.222727 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.231440 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.240536 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.249725 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.262653 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.269235 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.269273 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.269284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.269299 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.269310 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.371778 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.371819 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.371830 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.371872 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.371883 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.474296 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.474337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.474355 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.474374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.474387 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.576850 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.576884 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.576896 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.576911 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.576921 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.636119 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.636151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.636162 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.636175 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.636221 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.646821 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.650828 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.650856 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.650866 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.650877 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.650886 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.661482 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.664391 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.664436 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.664447 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.664464 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.664477 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.674506 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.677331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.677395 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.677409 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.677422 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.677431 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.686663 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.689805 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.689863 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.689879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.689894 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.689903 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.699818 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:37Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:37 crc kubenswrapper[4642]: E0128 06:48:37.699921 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.700921 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.700955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.700965 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.700977 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.700985 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.803683 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.803750 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.803760 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.803776 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.803787 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.906464 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.906520 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.906533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.906555 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:37 crc kubenswrapper[4642]: I0128 06:48:37.906568 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:37Z","lastTransitionTime":"2026-01-28T06:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.008802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.008873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.008882 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.008900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.008912 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.082865 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:01:39.948145614 +0000 UTC Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.097400 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:38 crc kubenswrapper[4642]: E0128 06:48:38.097565 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.111001 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.111036 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.111048 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.111066 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.111086 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.214337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.214409 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.214419 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.214430 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.214441 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.316601 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.316644 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.316655 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.316669 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.316681 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.323984 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:38 crc kubenswrapper[4642]: E0128 06:48:38.324239 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:38 crc kubenswrapper[4642]: E0128 06:48:38.324332 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:42.324306103 +0000 UTC m=+45.556394922 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.419057 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.419085 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.419096 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.419111 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.419123 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.521929 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.521985 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.521995 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.522020 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.522061 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.624614 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.624665 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.624677 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.624697 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.624711 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.726929 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.726986 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.726998 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.727015 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.727025 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.829037 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.829086 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.829099 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.829118 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.829129 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.931326 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.931388 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.931401 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.931419 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:38 crc kubenswrapper[4642]: I0128 06:48:38.931430 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:38Z","lastTransitionTime":"2026-01-28T06:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.033839 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.033878 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.033887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.033901 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.033909 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.083728 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:03:18.718974345 +0000 UTC Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.098272 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.098293 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.098376 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:39 crc kubenswrapper[4642]: E0128 06:48:39.098432 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:39 crc kubenswrapper[4642]: E0128 06:48:39.098634 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:39 crc kubenswrapper[4642]: E0128 06:48:39.098830 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.135629 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.135662 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.135672 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.135683 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.135692 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.237756 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.237801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.237818 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.237837 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.237850 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.339540 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.339592 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.339605 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.339626 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.339640 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.441344 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.441399 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.441408 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.441423 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.441436 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.543967 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.544013 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.544024 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.544045 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.544061 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.646142 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.646205 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.646214 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.646234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.646245 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.748041 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.748092 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.748104 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.748119 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.748130 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.850686 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.850722 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.850731 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.850747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.850761 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.955578 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.955627 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.955638 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.955658 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:39 crc kubenswrapper[4642]: I0128 06:48:39.955671 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:39Z","lastTransitionTime":"2026-01-28T06:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.057613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.057674 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.057683 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.057702 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.057719 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.084836 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:37:49.669396448 +0000 UTC Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.098244 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:40 crc kubenswrapper[4642]: E0128 06:48:40.098462 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.159923 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.160082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.160147 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.160234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.160292 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.264715 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.264786 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.264799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.264828 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.264849 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.367653 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.367698 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.367707 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.367730 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.367743 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.474163 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.474238 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.474250 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.474270 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.474286 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.576687 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.576731 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.576740 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.576757 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.576768 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.679562 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.679598 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.679611 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.679632 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.679642 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.781554 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.781741 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.781824 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.781909 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.781975 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.884315 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.884383 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.884395 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.884417 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.884428 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.986717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.986762 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.986772 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.986791 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:40 crc kubenswrapper[4642]: I0128 06:48:40.986803 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:40Z","lastTransitionTime":"2026-01-28T06:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.085690 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:46:29.516182294 +0000 UTC Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.088646 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.088686 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.088696 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.088711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.088722 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.098010 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.098039 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.098087 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:41 crc kubenswrapper[4642]: E0128 06:48:41.098208 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:48:41 crc kubenswrapper[4642]: E0128 06:48:41.098323 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:48:41 crc kubenswrapper[4642]: E0128 06:48:41.098434 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.192796 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.192953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.192970 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.192991 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.193004 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.295121 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.295180 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.295211 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.295236 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.295252 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.397820 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.397877 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.397888 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.397908 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.397922 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.500512 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.500564 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.500576 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.500594 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.500606 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.603202 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.603254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.603263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.603281 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.603293 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.705266 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.705320 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.705332 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.705366 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.705382 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.807333 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.807404 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.807413 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.807435 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.807448 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.909628 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.909664 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.909676 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.909691 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:41 crc kubenswrapper[4642]: I0128 06:48:41.909704 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:41Z","lastTransitionTime":"2026-01-28T06:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.011230 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.011263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.011272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.011285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.011294 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.086799 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:58:47.892841261 +0000 UTC
Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.098212 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.113201 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.113228 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.113238 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.113250 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.113258 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.214893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.214919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.214927 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.214940 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.214948 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.317014 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.317043 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.317053 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.317063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.317070 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.359631 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:42 crc kubenswrapper[4642]: E0128 06:48:42.359775 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:42 crc kubenswrapper[4642]: E0128 06:48:42.359827 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:48:50.35981266 +0000 UTC m=+53.591901469 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.418887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.418911 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.418919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.418930 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.418939 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.520782 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.520821 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.520831 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.520843 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.520854 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.623707 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.623917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.623978 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.624038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.624096 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.726919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.727111 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.727172 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.727285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.727371 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.829843 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.829879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.829889 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.829903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.829913 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.932280 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.932322 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.932331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.932358 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:42 crc kubenswrapper[4642]: I0128 06:48:42.932368 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:42Z","lastTransitionTime":"2026-01-28T06:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.034310 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.034368 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.034379 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.034395 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.034404 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.087571 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:53:41.28039842 +0000 UTC Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.098051 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.098076 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:43 crc kubenswrapper[4642]: E0128 06:48:43.098179 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:43 crc kubenswrapper[4642]: E0128 06:48:43.098290 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.098331 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:43 crc kubenswrapper[4642]: E0128 06:48:43.098653 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.098996 4642 scope.go:117] "RemoveContainer" containerID="b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.136555 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.136673 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.136748 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.136811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.136871 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.239517 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.239780 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.239791 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.239808 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.239818 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.337070 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/1.log" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.339314 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.339450 4642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341098 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341137 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341175 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.341175 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.358514 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.376813 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.389807 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.399852 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.407493 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.415290 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.422604 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.443554 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.443587 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 
06:48:43.443595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.443610 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.443620 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.444722 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.466675 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.483343 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.491097 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.499958 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.507912 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.516025 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.526051 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.545806 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.545845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.545856 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.545873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.545883 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.551174 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc
57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:43Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.648551 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.648591 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.648603 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.648618 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.648628 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.751433 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.751483 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.751495 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.751514 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.751527 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.853858 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.853901 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.853910 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.853932 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.853942 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.956041 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.956082 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.956094 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.956111 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:43 crc kubenswrapper[4642]: I0128 06:48:43.956122 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:43Z","lastTransitionTime":"2026-01-28T06:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.058983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.059035 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.059046 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.059065 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.059077 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.088469 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:17:17.279020753 +0000 UTC Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.097975 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:44 crc kubenswrapper[4642]: E0128 06:48:44.098118 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.162155 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.162224 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.162237 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.162255 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.162266 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.264408 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.264470 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.264481 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.264501 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.264513 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.343209 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/2.log" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.343827 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/1.log" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.346569 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb" exitCode=1 Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.346620 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.346668 4642 scope.go:117] "RemoveContainer" containerID="b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.347289 4642 scope.go:117] "RemoveContainer" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb" Jan 28 06:48:44 crc kubenswrapper[4642]: E0128 06:48:44.347450 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.358381 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.366146 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.366182 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.366205 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.366220 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.366231 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.369586 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.376804 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.384682 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.392941 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.402366 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.411118 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.419908 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.429214 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.440685 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.454889 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.464127 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.468785 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.468823 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.468834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.468852 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.468863 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.472898 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.482896 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.492325 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.502997 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:44Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.571113 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.571265 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
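Every status-manager patch in this stretch dies the same way: pod status writes pass through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock now reads 2026-01-28, so the TLS handshake fails before any patch is applied. This is the classic pattern of a VM image resumed long after its certificates were minted. A minimal Go sketch of the same validity-window check (endpoint taken from the log lines above; InsecureSkipVerify is set only so the expired certificate can still be retrieved for inspection, a diagnostic probe rather than a fix):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log; skip chain verification so
	// an expired certificate can still be pulled out and examined.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		// x509 validity is a plain window test: NotBefore <= now <= NotAfter.
		// Here now (2026-01-28) is after NotAfter (2025-08-24), hence the error.
		valid := !now.Before(cert.NotBefore) && !now.After(cert.NotAfter)
		fmt.Printf("%s: notBefore=%v notAfter=%v valid=%v\n",
			cert.Subject.CommonName, cert.NotBefore, cert.NotAfter, valid)
	}
}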
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.571363 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.571449 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.571532 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.673845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.673887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.673897 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.673918 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.673931 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.776357 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.776405 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.776415 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.776434 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.776445 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.878793 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.878848 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.878866 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.878890 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.878903 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.981712 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.981767 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.981779 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.981800 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:44 crc kubenswrapper[4642]: I0128 06:48:44.981819 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:44Z","lastTransitionTime":"2026-01-28T06:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.084424 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.084469 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.084481 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.084499 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
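All of the NodeNotReady records above reduce to a single condition: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, and the network plugin that would write it cannot make progress while the node is in this state. A sketch of the readiness test as the message describes it (the directory comes from the log; the accepted extensions are the conventional CNI ones and are an assumption here; the real check lives in the container runtime, not the kubelet):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// NetworkReady flips to true once at least one CNI config file shows up
	// in the directory named by the log message.
	const cniConfDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			fmt.Println("NetworkReady=true, found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", cniConfDir)
}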
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.084515 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.088729 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:10:53.799936173 +0000 UTC
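The certificate_manager line is best read against the skewed clock: the kubelet-serving certificate is valid until 2026-02-24, yet the printed rotation deadline (2025-11-15 here, 2025-11-30 and 2025-12-04 on the next passes) already lies in the past, so rotation will be attempted immediately. The deadline differs on every pass because it is re-drawn at random from a window inside the certificate's lifetime. A sketch of that jitter (the 70 to 90 percent window follows the upstream client-go certificate manager; treat the exact constants and the one-year lifetime as assumptions):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline draws a random point between 70% and 90% of the
// certificate lifetime, mirroring the jitter used by the client-go
// certificate manager (constants assumed, not taken from this log).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // hypothetical one-year lifetime

	// Each evaluation draws a fresh deadline, which is why successive log
	// lines print different dates; once the clock passes it, rotation starts.
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}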
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.098172 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:45 crc kubenswrapper[4642]: E0128 06:48:45.098326 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.098367 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:45 crc kubenswrapper[4642]: E0128 06:48:45.098457 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.098662 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:45 crc kubenswrapper[4642]: E0128 06:48:45.098842 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.187096 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.187139 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.187150 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.187170 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.187180 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.289159 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.289230 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.289241 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.289263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.289276 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.352227 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/2.log"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.392244 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.392294 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.392304 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.392329 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.392357 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.494802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.494844 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.494872 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.494886 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.494896 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.598038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.598095 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.598105 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.598123 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.598137 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.700452 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.700514 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.700528 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.700547 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.700561 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.803144 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.803221 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.803233 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.803253 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.803265 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.905528 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.905590 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.905602 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.905625 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:45 crc kubenswrapper[4642]: I0128 06:48:45.905637 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:45Z","lastTransitionTime":"2026-01-28T06:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.009093 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.009154 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.009172 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.009215 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.009240 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.089355 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:45:09.612001914 +0000 UTC Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.098238 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.098387 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.111857 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.111900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.111912 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.111931 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.111944 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.214865 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.214914 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.214926 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.214945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.214958 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.317356 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.317399 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.317412 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.317429 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.317439 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.420267 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.420327 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.420338 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.420374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.420387 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.523807 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.523862 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.523872 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.523890 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.523903 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.626992 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.627042 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.627054 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.627073 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.627088 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.729752 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.729802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.729811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.729837 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.729849 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.803863 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.804063 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:49:18.804040277 +0000 UTC m=+82.036129087 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
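This TearDown failure is a registration problem, not a mount problem: after a kubelet restart, a CSI driver is only usable once its node plugin re-registers over the kubelet's plugin-registration socket, and kubevirt.io.hostpath-provisioner has not reappeared yet, so the unmount is parked and retried on backoff (the durationBeforeRetry 32s above). A sketch of the presence check (the plugins_registry directory is the conventional kubelet location and the socket-name pattern is an assumption for this host):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// CSI node plugins announce themselves by creating a unix socket under the
	// kubelet plugin registry; no socket means the driver is "not found in the
	// list of registered CSI drivers", exactly as the log line says.
	const registry = "/var/lib/kubelet/plugins_registry" // conventional path, assumed
	const driver = "kubevirt.io.hostpath-provisioner"

	matches, err := filepath.Glob(filepath.Join(registry, driver+"*.sock"))
	if err != nil || len(matches) == 0 {
		fmt.Println("driver", driver, "not registered; unmount will be retried")
		os.Exit(1)
	}
	fmt.Println("driver registered via", matches[0])
}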
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.804284 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.804362 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.804509 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.804584 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.804628 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:18.804606432 +0000 UTC m=+82.036695241 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.804664 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:18.804655363 +0000 UTC m=+82.036744173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.832321 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.832391 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.832402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.832424 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.832434 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.905634 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.905731 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.905924 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.905984 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.906002 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.905930 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
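The repeated object "..." not registered errors are not API-server failures: the kubelet resolves configmaps and secrets for volume mounts through a local watch-based cache, and a source is only registered there once the pod that references it has been (re)admitted after the restart. Until that happens, every projected-volume SetUp fails fast and lands on the same 32s retry backoff. A toy model of that register-then-get contract (types and names are illustrative, not the kubelet's actual ones):

package main

import "fmt"

// objectCache is a toy stand-in for the kubelet's cache-based configmap and
// secret manager: GetObject succeeds only for keys an admitted pod registered.
type objectCache struct {
	registered map[string]string // "namespace/name" -> payload
}

func (c *objectCache) RegisterPod(namespace string, names ...string) {
	for _, n := range names {
		c.registered[namespace+"/"+n] = "<synced from apiserver>"
	}
}

func (c *objectCache) GetObject(namespace, name string) (string, error) {
	if v, ok := c.registered[namespace+"/"+name]; ok {
		return v, nil
	}
	// Mirrors the log wording: object "ns"/"name" not registered.
	return "", fmt.Errorf("object %q/%q not registered", namespace, name)
}

func main() {
	c := &objectCache{registered: map[string]string{}}

	// Before the pod is re-admitted: volume setup fails fast, as in the log.
	if _, err := c.GetObject("openshift-network-diagnostics", "kube-root-ca.crt"); err != nil {
		fmt.Println("MountVolume.SetUp failed:", err)
	}

	// Once the pod is registered, the same lookup succeeds.
	c.RegisterPod("openshift-network-diagnostics", "kube-root-ca.crt", "openshift-service-ca.crt")
	if v, err := c.GetObject("openshift-network-diagnostics", "kube-root-ca.crt"); err == nil {
		fmt.Println("mounted:", v)
	}
}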
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.906095 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.906114 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.906077 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:18.906055294 +0000 UTC m=+82.138144113 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:46 crc kubenswrapper[4642]: E0128 06:48:46.906180 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:18.906165622 +0000 UTC m=+82.138254431 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.935219 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.935269 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.935283 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.935305 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:46 crc kubenswrapper[4642]: I0128 06:48:46.935323 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:46Z","lastTransitionTime":"2026-01-28T06:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.037819 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.037864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.037874 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.037893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.037906 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.090232 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:40:52.592336367 +0000 UTC Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.097743 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.097804 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.097935 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:47 crc kubenswrapper[4642]: E0128 06:48:47.098100 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:47 crc kubenswrapper[4642]: E0128 06:48:47.098245 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:47 crc kubenswrapper[4642]: E0128 06:48:47.098358 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.110237 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.123557 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.135007 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.139181 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.139242 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.139253 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.139273 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.139288 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.148814 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.160507 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.166916 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.167654 4642 scope.go:117] "RemoveContainer" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb"
Jan 28 06:48:47 crc kubenswrapper[4642]: E0128 06:48:47.167823 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0"
Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.172857 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z"
15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.181829 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc 
kubenswrapper[4642]: I0128 06:48:47.191699 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.200109 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.210545 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.219874 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.228813 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.238009 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.241457 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.241506 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.241518 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.241537 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.241549 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.252338 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.261334 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.276052 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35193ef5e485263e1e373e9e2db577f3019bf0d55871920e59a0b313baed854\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:29Z\\\",\\\"message\\\":\\\"ol:\\\\\\\"TCP\\\\\\\", inport:9393, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0128 06:48:29.954821 6081 lb_config.go:1031] Cluster endpoints for openshift-kube-apiserver-operator/metrics for network=default are: map[]\\\\nI0128 06:48:29.954839 6081 services_controller.go:444] Built service openshift-ingress-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 06:48:29.954849 6081 services_controller.go:445] Built service openshift-ingress-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 06:48:29.954846 6081 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f3
6cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.284800 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.293818 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
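Every patch in this burst is rejected for the same root cause: the serving certificate of the network-node-identity webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28. The exact message comes from Go's certificate chain verifier, which rejects a certificate whose validity window has closed before any other check runs. A self-contained reproduction, using a throwaway self-signed certificate generated in-process (purely an illustrative stand-in for the webhook's expired serving cert):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"time"
)

func main() {
	// Self-signed throwaway cert whose validity window ended a minute ago.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity"},
		NotBefore:             time.Now().Add(-24 * time.Hour),
		NotAfter:              time.Now().Add(-time.Minute),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AddCert(cert)
	// Verification fails on the validity window before anything else:
	// "x509: certificate has expired or is not yet valid:
	//  current time ... is after ..."
	if _, err := cert.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		fmt.Println(err)
	}
}

The retried network-operator status-patch payload continues below.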
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.306902 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc
57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.316318 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.325961 4642 status_manager.go:875] "Failed to update status for pod" 
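Why a pod status PATCH can fail on TLS at all: the API server forwards matching pod writes to the admission webhook over HTTPS, and when that call errors, the whole write is returned to kubelet as "Internal error occurred". A sketch of the client side of that hop, assuming a placeholder request body and an illustrative CA-bundle path (the real request is an admission.k8s.io/v1 AdmissionReview; the URL and 10s timeout are taken from the log):

package main

import (
	"bytes"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// CA bundle the API server would trust for this webhook; the path here
	// is illustrative, not the real OpenShift location.
	pem, err := os.ReadFile("webhook-ca.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(pem)

	client := &http.Client{
		Timeout: 10 * time.Second, // mirrors ?timeout=10s in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	// Placeholder body; the real payload is an AdmissionReview object.
	resp, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
		"application/json", bytes.NewBufferString(`{}`))
	if err != nil {
		// With an expired serving cert this prints:
		// Post "https://127.0.0.1:9743/pod?timeout=10s": tls: failed to
		// verify certificate: x509: certificate has expired or is not yet
		// valid: ...
		fmt.Println(err)
		return
	}
	resp.Body.Close()
}

The network-check-target status-patch entry continues below.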
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.336941 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.345269 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.345306 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc 
kubenswrapper[4642]: I0128 06:48:47.345316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.345331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.345352 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.347578 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.355378 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.365173 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.374499 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.384995 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.394930 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.404735 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.415325 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.425490 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.434116 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.447953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.447992 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.448003 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.448023 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.448036 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.551213 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.551260 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.551271 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.551291 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.551305 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.653717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.653771 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.653784 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.653802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.653814 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.756227 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.756272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.756283 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.756301 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.756312 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.850390 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.859087 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.859133 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.859144 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.859166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.859211 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.861722 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.864014 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.871925 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.881341 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.891620 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.904735 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.915548 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.924460 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.937298 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.945825 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.952461 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960048 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960861 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960902 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960920 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.960931 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:47Z","lastTransitionTime":"2026-01-28T06:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.969315 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.978256 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.988266 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:47 crc kubenswrapper[4642]: I0128 06:48:47.997501 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:47Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.006519 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.016873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.016916 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.016928 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.016948 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.016960 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.026725 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.029949 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.030003 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.030016 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.030033 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.030045 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.040052 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.043142 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.043177 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.043205 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.043222 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.043233 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.052937 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.056099 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.056158 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.056174 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.056210 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.056222 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.066205 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.069014 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.069048 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.069059 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.069077 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.069089 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.078862 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list elided (verbatim duplicate of the previous retry above) …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:48Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.078983 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.080062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.080099 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.080112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.080124 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.080136 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.091245 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:25:50.181443037 +0000 UTC Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.098390 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:48 crc kubenswrapper[4642]: E0128 06:48:48.098517 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.182548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.182587 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.182600 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.182614 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.182628 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.284787 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.284837 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.284849 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.284873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.284889 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.387394 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.387433 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.387445 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.387461 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.387474 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.490240 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.490320 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.490330 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.490365 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.490377 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.592746 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.592781 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.592793 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.592810 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.592820 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.695287 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.695327 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.695337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.695372 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.695385 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.797783 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.797828 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.797838 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.797853 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.797862 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.900696 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.900754 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.900765 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.900785 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:48 crc kubenswrapper[4642]: I0128 06:48:48.900798 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:48Z","lastTransitionTime":"2026-01-28T06:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.004283 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.004338 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.004368 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.004396 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.004408 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.092237 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:21:03.642705476 +0000 UTC Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.098058 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.098103 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:49 crc kubenswrapper[4642]: E0128 06:48:49.098252 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.098319 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:49 crc kubenswrapper[4642]: E0128 06:48:49.098411 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:49 crc kubenswrapper[4642]: E0128 06:48:49.098485 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.106875 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.106925 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.106936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.106959 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.106972 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.209585 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.209628 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.209638 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.209654 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.209665 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.311613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.311659 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.311670 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.311685 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.311697 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.413814 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.413863 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.413873 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.413889 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.413901 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.516386 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.516461 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.516473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.516494 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.516507 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.619252 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.619300 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.619312 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.619331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.619356 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.721214 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.721248 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.721257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.721270 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.721281 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.822862 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.822893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.822902 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.822919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.822928 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.924999 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.925041 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.925053 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.925071 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:49 crc kubenswrapper[4642]: I0128 06:48:49.925082 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:49Z","lastTransitionTime":"2026-01-28T06:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.027064 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.027120 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.027130 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.027143 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.027172 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.092804 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:11:34.921249791 +0000 UTC Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.098334 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:50 crc kubenswrapper[4642]: E0128 06:48:50.098457 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.129039 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.129077 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.129086 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.129103 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.129113 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.231709 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.231759 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.231772 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.231789 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.231802 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.334123 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.334152 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.334162 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.334176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.334206 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.436245 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.436284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.436292 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.436307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.436318 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.444549 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:50 crc kubenswrapper[4642]: E0128 06:48:50.444684 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:50 crc kubenswrapper[4642]: E0128 06:48:50.444761 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:06.444741053 +0000 UTC m=+69.676829863 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.538594 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.538629 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.538639 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.538656 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.538670 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.640983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.641033 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.641044 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.641063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.641075 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.743112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.743208 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.743223 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.743245 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.743262 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.845325 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.845373 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.845382 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.845402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.845411 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.947094 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.947142 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.947151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.947169 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:50 crc kubenswrapper[4642]: I0128 06:48:50.947203 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:50Z","lastTransitionTime":"2026-01-28T06:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.049683 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.049726 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.049737 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.049753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.049765 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.093867 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:31:21.831695129 +0000 UTC Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.098873 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.098895 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:51 crc kubenswrapper[4642]: E0128 06:48:51.099059 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.098895 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:51 crc kubenswrapper[4642]: E0128 06:48:51.099144 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:51 crc kubenswrapper[4642]: E0128 06:48:51.099223 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.151900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.151945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.151957 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.151976 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.151988 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.255059 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.255106 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.255118 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.255137 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.255156 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.357516 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.357581 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.357591 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.357610 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.357622 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.460605 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.460672 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.460686 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.460712 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.460734 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.563651 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.563701 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.563713 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.563733 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.563748 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.666100 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.666149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.666160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.666175 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.666208 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.768639 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.768679 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.768689 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.768706 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.768720 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.870813 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.871179 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.871234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.871254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.871266 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.974633 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.974668 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.974678 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.974692 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:51 crc kubenswrapper[4642]: I0128 06:48:51.974705 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:51Z","lastTransitionTime":"2026-01-28T06:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.080781 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.080836 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.080848 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.080867 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.080891 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.094067 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:37:31.198139339 +0000 UTC Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.098376 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:52 crc kubenswrapper[4642]: E0128 06:48:52.098509 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.183406 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.183447 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.183463 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.183481 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.183494 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.286493 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.286536 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.286548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.286563 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.286575 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.388643 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.388684 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.388694 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.388707 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.388719 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.491290 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.491339 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.491364 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.491385 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.491398 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.593776 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.593825 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.593834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.593851 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.593865 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.697316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.697371 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.697382 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.697398 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.697412 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.799339 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.799386 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.799403 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.799420 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.799430 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.901195 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.901225 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.901237 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.901248 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:52 crc kubenswrapper[4642]: I0128 06:48:52.901259 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:52Z","lastTransitionTime":"2026-01-28T06:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.002793 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.002835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.002845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.002855 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.002867 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.094965 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:23:13.933442914 +0000 UTC Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.098408 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.098410 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:53 crc kubenswrapper[4642]: E0128 06:48:53.098509 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.098546 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:53 crc kubenswrapper[4642]: E0128 06:48:53.098636 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:53 crc kubenswrapper[4642]: E0128 06:48:53.098752 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.104081 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.104100 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.104107 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.104115 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.104123 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.206399 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.206426 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.206434 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.206444 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.206457 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.308062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.308107 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.308117 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.308133 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.308146 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.410303 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.410338 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.410360 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.410374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.410390 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.513301 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.513339 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.513361 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.513376 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.513388 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.615575 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.615606 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.615615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.615626 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.615635 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.717755 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.717788 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.717797 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.717810 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.717819 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.820209 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.820253 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.820262 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.820281 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.820291 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.922717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.922778 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.922786 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.922802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:53 crc kubenswrapper[4642]: I0128 06:48:53.922812 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:53Z","lastTransitionTime":"2026-01-28T06:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.025330 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.025388 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.025400 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.025418 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.025432 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.095313 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:11:19.786638497 +0000 UTC Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.097600 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:48:54 crc kubenswrapper[4642]: E0128 06:48:54.097736 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.127224 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.127274 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.127285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.127301 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.127310 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.229861 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.229894 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.229903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.229917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.229925 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.332261 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.332308 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.332317 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.332335 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.332365 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.433974 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.434023 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.434035 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.434051 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.434064 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.535835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.535871 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.535881 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.535895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.535905 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.639624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.640092 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.640173 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.640316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.640424 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.743285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.743316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.743324 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.743337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.743357 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.845570 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.845603 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.845614 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.845632 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.845846 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.948903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.948948 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.948958 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.948976 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:54 crc kubenswrapper[4642]: I0128 06:48:54.948989 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:54Z","lastTransitionTime":"2026-01-28T06:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.050699 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.050740 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.050751 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.050768 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.050780 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.096388 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:43:09.839150566 +0000 UTC Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.097619 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.097673 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:55 crc kubenswrapper[4642]: E0128 06:48:55.097756 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.097871 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:55 crc kubenswrapper[4642]: E0128 06:48:55.097958 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:55 crc kubenswrapper[4642]: E0128 06:48:55.098249 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.152862 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.152908 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.152919 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.152937 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.152950 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.255104 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.255139 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.255162 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.255183 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.255217 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.357313 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.357366 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.357378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.357393 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.357404 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.459047 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.459074 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.459084 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.459095 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.459105 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.561355 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.561413 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.561425 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.561447 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.561464 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.663569 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.663599 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.663607 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.663618 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.663628 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.765041 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.765069 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.765080 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.765094 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.765107 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.867774 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.867812 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.867822 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.867843 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.867854 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.970688 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.970734 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.970747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.970766 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:55 crc kubenswrapper[4642]: I0128 06:48:55.970778 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:55Z","lastTransitionTime":"2026-01-28T06:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.072859 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.072897 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.072909 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.072925 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.072934 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.097301 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:20:55.871690875 +0000 UTC
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.097438 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:48:56 crc kubenswrapper[4642]: E0128 06:48:56.097613 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.175443 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.175495 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.175507 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.175523 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.175538 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.278232 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.278269 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.278298 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.278315 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.278325 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.381023 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.381077 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.381087 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.381104 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.381115 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.483304 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.483363 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.483374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.483401 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.483416 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.585435 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.585460 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.585470 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.585480 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.585490 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.688493 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.688548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.688560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.688580 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.688594 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.791371 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.791434 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.791448 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.791471 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.791483 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.893460 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.893522 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.893533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.893552 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.893565 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.995480 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.995510 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.995518 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.995532 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:56 crc kubenswrapper[4642]: I0128 06:48:56.995544 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:56Z","lastTransitionTime":"2026-01-28T06:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097335 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097437 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097463 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: E0128 06:48:57.097460 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097507 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097513 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097521 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:57 crc kubenswrapper[4642]: E0128 06:48:57.097609 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097513 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:57 crc kubenswrapper[4642]: E0128 06:48:57.097690 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.097680 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:46:06.024279954 +0000 UTC Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.110125 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.120960 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.135257 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc
57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.145509 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.154321 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.166319 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.176013 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.188550 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.197100 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.199460 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.199505 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.199514 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.199527 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.199536 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.206867 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.213939 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.224325 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.232242 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.240802 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.250154 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.259179 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.267373 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:57Z is after 2025-08-24T17:21:41Z" Jan 28 
06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.302424    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.302585    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.302670    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.302774    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.302839    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.406869    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.407306    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.407393    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.407474    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.407558    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.511424    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.511486    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.511497    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.511516    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.511529    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.614167    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.614228    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.614239    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.614256    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.614267    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.717216    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.717254    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.717268    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.717288    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.717302    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.820495    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.820543    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.820553    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.820570    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.820583    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.923095    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.923156    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.923166    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.923182    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:57 crc kubenswrapper[4642]: I0128 06:48:57.923217    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:57Z","lastTransitionTime":"2026-01-28T06:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.025356    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.025413    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.025424    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.025445    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.025456    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.097448    4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.097630    4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.098026    4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:36:57.524836989 +0000 UTC
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.128171    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.128237    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.128247    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.128269    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.128279    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.230460    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.230570    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.230582    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.230604    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.230618    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.333992    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.334035    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.334043    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.334059    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.334071    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.431742    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.431785    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.431794    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.431808    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.431819    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.443932 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:58Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.447822 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.447868 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.447881 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.447901 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.447914 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.462711 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:58Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.465567 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.465604 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.465615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.465627 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.465635 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.474513 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:58Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.477051 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.477085 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.477095 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.477107 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.477116 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.486101 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:58Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.488799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.488841 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.488870 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.488889 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.488900 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.498569 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48:58Z is after 2025-08-24T17:21:41Z" Jan 28 06:48:58 crc kubenswrapper[4642]: E0128 06:48:58.498689 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.499858 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.499879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.499888 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.499900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.499910 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.601674 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.601703 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.601711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.601727 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.601738 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.703537 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.703564 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.703573 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.703584 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.703593 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.805567 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.805593 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.805602 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.805615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.805624 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.908233 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.908291 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.908303 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.908327 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:58 crc kubenswrapper[4642]: I0128 06:48:58.908350 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:58Z","lastTransitionTime":"2026-01-28T06:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.010610 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.010669 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.010680 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.010698 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.010710 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.098486 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.098598 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:27:28.386113291 +0000 UTC Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.098727 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:48:59 crc kubenswrapper[4642]: E0128 06:48:59.098815 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.098764 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.098987 4642 scope.go:117] "RemoveContainer" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb" Jan 28 06:48:59 crc kubenswrapper[4642]: E0128 06:48:59.098989 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:48:59 crc kubenswrapper[4642]: E0128 06:48:59.099095 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:48:59 crc kubenswrapper[4642]: E0128 06:48:59.099301 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.113026 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.113053 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.113063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.113076 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.113088 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.215510 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.215571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.215582 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.215602 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.215962 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.319585 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.319623 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.319633 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.319649 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.319658 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.421953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.422000 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.422010 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.422027 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.422037 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.524914 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.525291 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.525408 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.525508 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.525587 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.628747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.628785 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.628794 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.628812 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.628822 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.731175 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.731219 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.731229 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.731240 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.731249 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.833658 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.833682 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.833693 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.833704 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.833711 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.935336 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.935380 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.935389 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.935402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:48:59 crc kubenswrapper[4642]: I0128 06:48:59.935411 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:48:59Z","lastTransitionTime":"2026-01-28T06:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.038079 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.038130 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.038141 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.038151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.038159 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.097723 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:00 crc kubenswrapper[4642]: E0128 06:49:00.097818 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.099831 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:17:22.696259676 +0000 UTC Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.140515 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.140562 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.140572 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.140590 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.140601 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.243810 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.243855 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.243866 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.243884 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.243895 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.346297 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.346329 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.346347 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.346362 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.346373 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.448145 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.448180 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.448212 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.448229 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.448240 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.551473 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.551509 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.551520 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.551537 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.551548 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.653577 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.653733 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.653804 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.653871 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.653932 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.755934 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.756063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.756138 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.756224 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.756291 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.858119 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.858144 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.858152 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.858164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.858173 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.960717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.960761 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.960773 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.960795 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:00 crc kubenswrapper[4642]: I0128 06:49:00.960809 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:00Z","lastTransitionTime":"2026-01-28T06:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.063533 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.063567 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.063578 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.063595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.063607 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.098429 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.098464 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.098430 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:01 crc kubenswrapper[4642]: E0128 06:49:01.098546 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:01 crc kubenswrapper[4642]: E0128 06:49:01.098610 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:01 crc kubenswrapper[4642]: E0128 06:49:01.098686 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.100307 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:22:34.732726486 +0000 UTC Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.166258 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.166318 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.166329 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.166361 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.166375 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.268621 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.268660 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.268670 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.268682 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.268693 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.371112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.371148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.371157 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.371170 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.371183 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.474116 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.474160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.474174 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.474211 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.474222 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.576573 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.576602 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.576612 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.576624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.576634 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.678795 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.678825 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.678836 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.678846 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.678855 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.780962 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.781022 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.781035 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.781056 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.781068 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.884243 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.884596 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.884626 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.884658 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.884688 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.987137 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.987206 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.987223 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.987241 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:01 crc kubenswrapper[4642]: I0128 06:49:01.987251 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:01Z","lastTransitionTime":"2026-01-28T06:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.089772 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.089808 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.089820 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.089836 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.089847 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.098006 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:02 crc kubenswrapper[4642]: E0128 06:49:02.098114 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.101182 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:18:35.679473772 +0000 UTC Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.191988 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.192038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.192053 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.192072 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.192085 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.294530 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.294583 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.294595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.294615 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.294630 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.396798 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.396833 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.396844 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.396896 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.396906 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.499160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.499230 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.499242 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.499258 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.499273 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.601279 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.601309 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.601318 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.601332 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.601349 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.703583 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.703633 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.703645 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.703664 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.703676 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.806556 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.806595 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.806606 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.806624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.806633 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.908710 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.908760 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.908772 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.908787 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:02 crc kubenswrapper[4642]: I0128 06:49:02.908802 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:02Z","lastTransitionTime":"2026-01-28T06:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.011236 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.011292 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.011305 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.011325 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.011368 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.097668 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.097821 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.097935 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:03 crc kubenswrapper[4642]: E0128 06:49:03.097922 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:03 crc kubenswrapper[4642]: E0128 06:49:03.098078 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:03 crc kubenswrapper[4642]: E0128 06:49:03.098133 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
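
The "no CNI configuration file" errors above come from the kubelet's network-readiness probe: until the network provider writes a config into /etc/kubernetes/cni/net.d/, every pod that needs a new sandbox is skipped. Below is a minimal Go sketch of that existence check, assuming only the directory path shown in the log; the real check lives in libcni and looks for *.conf, *.conflist, or *.json files, so treat this as an illustration rather than the kubelet's exact code.

    // cnicheck.go: sketch of the readiness test behind the
    // "no CNI configuration file" message above (illustrative only).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // path from the log
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            // libcni accepts these extensions as network configs.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("NetworkReady=true, found", e.Name())
                return
            }
        }
        fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
    }

On this cluster the file is normally written by the SDN pods themselves, which is why the condition clears on its own once networking comes up.
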
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.101262 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:59:56.179167941 +0000 UTC Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.113753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.113789 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.113799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.113815 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.113830 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.216313 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.216375 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.216392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.216410 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.216422 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.318580 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.318632 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.318642 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.318659 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.318673 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.421098 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.421155 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.421166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.421207 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.421225 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.523859 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.523912 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.523922 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.523943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.523958 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.626727 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.626782 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.626795 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.626813 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.626825 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.729484 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.729559 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.729569 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.729602 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.729614 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.832548 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.833011 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.833114 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.833225 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.833303 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.935745 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.935782 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.935792 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.935807 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:03 crc kubenswrapper[4642]: I0128 06:49:03.935818 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:03Z","lastTransitionTime":"2026-01-28T06:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.038284 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.038361 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.038373 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.038393 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.038405 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.098369 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:04 crc kubenswrapper[4642]: E0128 06:49:04.098521 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
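
The certificate_manager lines in this log print a different "rotation deadline" on each pass because client-go re-randomizes the deadline within the certificate's validity window every time it evaluates rotation. Below is a Go sketch of that jitter; the 70-90% band is recalled from client-go's certificate manager and should be treated as approximate, and notBefore is an assumed value since the log only shows the expiration.

    // jitter.go: why the rotation deadline keeps moving (sketch).
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline mirrors the idea: notBefore + d, where d is
    // drawn uniformly from [0.7, 0.9] of the total validity duration.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := float64(notAfter.Sub(notBefore))
        jittered := time.Duration(total * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
        notBefore := notAfter.Add(-30 * 24 * time.Hour)           // assumed issue time
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
        }
    }
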
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.102112 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:28:02.761139285 +0000 UTC Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.141125 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.141161 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.141176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.141217 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.141230 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.243589 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.243636 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.243645 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.243664 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.243678 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.346096 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.346139 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.346148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.346167 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.346210 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.449114 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.449163 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.449173 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.449216 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.449228 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.553773 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.553813 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.553823 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.553840 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.553851 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.656071 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.656111 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.656120 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.656135 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.656145 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.758518 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.758556 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.758565 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.758592 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.758604 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.861505 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.861560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.861571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.861591 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.861603 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.965123 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.965229 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.965258 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.965288 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:04 crc kubenswrapper[4642]: I0128 06:49:04.965306 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:04Z","lastTransitionTime":"2026-01-28T06:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.067360 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.067401 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.067410 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.067428 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.067440 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.098033 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.098081 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:05 crc kubenswrapper[4642]: E0128 06:49:05.098127 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
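
The setters.go records above republish the node's Ready condition on every status sync until networking recovers. A small client-go sketch that reads the same condition back from the API server is shown below; the node name "crc" comes from the log, while the kubeconfig location is an assumption.

    // readycheck.go: read back the Ready condition published above (sketch).
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config points at this cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
    }
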
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:05 crc kubenswrapper[4642]: E0128 06:49:05.098214 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.098276 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:05 crc kubenswrapper[4642]: E0128 06:49:05.098444 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.103169 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:13:53.255502075 +0000 UTC Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.169811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.169989 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.170068 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.170134 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.170208 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.272613 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.272653 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.272663 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.272682 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.272693 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.375012 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.375055 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.375064 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.375081 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.375093 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.477835 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.477880 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.477892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.477909 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.477921 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.580059 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.580254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.580327 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.580414 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.580487 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.682868 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.682905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.682916 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.682935 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.682947 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.785138 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.785176 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.785215 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.785234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.785246 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.887764 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.887814 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.887824 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.887841 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.887851 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.990887 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.990943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.990955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.990983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:05 crc kubenswrapper[4642]: I0128 06:49:05.990995 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:05Z","lastTransitionTime":"2026-01-28T06:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.094162 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.094269 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.094282 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.094302 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.094316 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.098435 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:06 crc kubenswrapper[4642]: E0128 06:49:06.098597 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.103621 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:05:05.684511156 +0000 UTC Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.197322 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.197392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.197402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.197426 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.197440 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.300385 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.300441 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.300453 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.300474 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.300487 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.403302 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.403366 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.403378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.403395 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.403408 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.502501 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:06 crc kubenswrapper[4642]: E0128 06:49:06.502755 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:49:06 crc kubenswrapper[4642]: E0128 06:49:06.502926 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:49:38.502896174 +0000 UTC m=+101.734984982 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.506273 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.506322 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.506334 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.506365 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.506374 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.609480 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.609521 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.609531 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.609543 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.609554 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.711561 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.711592 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.711603 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.711616 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.711624 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.813500 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.813536 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.813549 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.813561 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.813570 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.915574 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.915624 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.915634 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.915657 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:06 crc kubenswrapper[4642]: I0128 06:49:06.915671 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:06Z","lastTransitionTime":"2026-01-28T06:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.018397 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.018446 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.018456 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.018474 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.018501 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.097817 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.097852 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.097817 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:07 crc kubenswrapper[4642]: E0128 06:49:07.097972 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:07 crc kubenswrapper[4642]: E0128 06:49:07.098150 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:07 crc kubenswrapper[4642]: E0128 06:49:07.098264 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.103755 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:11:24.844470634 +0000 UTC Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.109588 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.121482 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.121516 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.121527 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.121543 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.121555 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.122290 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.134984 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.145318 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.154818 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.165487 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.175116 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.183399 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.192316 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.203228 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.213068 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.223904 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.223955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.223967 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.223983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.223994 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.226594 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc
57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.236723 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.245758 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.255301 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.263993 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.271426 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.326631 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.326672 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.326684 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc 
kubenswrapper[4642]: I0128 06:49:07.326705 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.326718 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.422712 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/0.log" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.422762 4642 generic.go:334] "Generic (PLEG): container finished" podID="3d569b7c-8a0e-4074-b61f-4139413b9849" containerID="04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849" exitCode=1 Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.422805 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerDied","Data":"04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.423207 4642 scope.go:117] "RemoveContainer" containerID="04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.433165 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.433227 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.433238 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.433254 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.433264 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.439201 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.448591 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.458357 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.467465 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to /host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.474803 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.482975 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.490449 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.498139 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.508125 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.516031 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.525768 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.532765 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.535100 4642 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.535128 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.535139 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.535155 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.535166 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.540433 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.548049 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.560884 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.570610 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.587160 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc
57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:07Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.637794 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.637834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.637844 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.637860 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.637870 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.739953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.739996 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.740006 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.740025 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.740035 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.843413 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.843463 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.843481 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.843498 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.843507 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.945435 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.945472 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.945481 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.945498 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:07 crc kubenswrapper[4642]: I0128 06:49:07.945508 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:07Z","lastTransitionTime":"2026-01-28T06:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.047482 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.048796 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.048831 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.048849 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.048861 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.098113 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.098338 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.104732 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:00:17.15450897 +0000 UTC Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.152862 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.152905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.152921 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.152942 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.152955 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.255394 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.255424 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.255432 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.255446 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.255456 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.358159 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.358232 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.358243 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.358260 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.358274 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.427306 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/0.log" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.427366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerStarted","Data":"0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.436055 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.443734 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.453308 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.460121 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.460151 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.460159 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.460172 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.460182 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.463840 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.477667 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.487204 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.497383 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.507064 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.518398 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.529421 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to /host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.539775 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.548042 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.557302 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.562888 4642 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.562927 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.562938 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.562956 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.562967 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.566016 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.575669 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.584945 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.595044 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.645541 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.645582 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.645596 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.645618 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.645629 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.656238 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.659222 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.659262 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.659275 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.659293 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.659309 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.668942 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.672287 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.672355 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.672367 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.672383 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.672393 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.681610 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.686768 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.686808 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.686834 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.686855 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.686869 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.700204 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.703824 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.703860 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.703872 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.703886 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.703897 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.712904 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:08Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:08 crc kubenswrapper[4642]: E0128 06:49:08.713018 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.714746 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.714799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.714811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.714833 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.714847 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.818128 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.818164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.818173 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.818205 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.818218 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.920792 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.920821 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.920830 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.920845 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:08 crc kubenswrapper[4642]: I0128 06:49:08.920858 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:08Z","lastTransitionTime":"2026-01-28T06:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.023374 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.023416 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.023427 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.023442 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.023454 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.097611 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.097670 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:09 crc kubenswrapper[4642]: E0128 06:49:09.097735 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.097777 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:09 crc kubenswrapper[4642]: E0128 06:49:09.097822 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:49:09 crc kubenswrapper[4642]: E0128 06:49:09.097864 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.105602 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:10:17.750847368 +0000 UTC
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.126849 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.126905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.126917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.126936 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.126948 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.230298 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.230377 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.230392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.230422 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.230441 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
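The certificate_manager.go:356 entries above report the kubelet-serving certificate's fixed expiry alongside a rotation deadline that changes on every attempt (2026-01-11 here, 2026-01-16 and 2025-11-17 further down). That pattern is consistent with the jittered scheduling in upstream client-go's certificate manager, which picks a random point at roughly 70-90% of the certificate's total lifetime. Below is a minimal sketch of that computation, not the kubelet's exact code; the certificate's NotBefore is not logged, so a one-year lifetime is assumed for illustration.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline mirrors the jitter scheme used by client-go's
// certificate manager: a random point at 70-90% of the cert lifetime.
// Illustrative sketch only.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log line above; NotBefore is assumed.
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
}

Because the deadline is re-randomized whenever rotation is re-evaluated, consecutive log lines showing wildly different deadlines (including ones already in the past, which trigger an immediate rotation attempt) are expected here, not a clock problem.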
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.332544 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.332580 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.332591 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.332606 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.332617 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.434639 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.434691 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.434701 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.434718 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.434733 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.537275 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.537330 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.537340 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.537369 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.537383 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.639972 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.640027 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.640038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.640065 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.640077 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.742259 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.742324 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.742336 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.742368 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.742382 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.844387 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.844433 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.844443 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.844466 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.844476 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
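Each setters.go:603 entry above embeds the Ready condition the kubelet is about to write into the node's status. A minimal sketch of how such a condition is shaped with the k8s.io/api types follows; the field names come straight from the logged JSON, the message is shortened for brevity, and the snippet assumes a module with the k8s.io/api and k8s.io/apimachinery dependencies.

package main

import (
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.NewTime(time.Now())
	// Mirrors the condition printed by setters.go:603 above.
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false ...",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // {"type":"Ready","status":"False",...}
}

Note that lastTransitionTime advances with every entry in this log: because the node never leaves the not-ready state, each status write re-stamps the same False condition rather than recording a real transition.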
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.947141 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.947214 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.947226 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.947250 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:09 crc kubenswrapper[4642]: I0128 06:49:09.947263 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:09Z","lastTransitionTime":"2026-01-28T06:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.049950 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.050005 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.050014 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.050032 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.050043 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.098381 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:10 crc kubenswrapper[4642]: E0128 06:49:10.098544 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.106585 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:53:34.547512603 +0000 UTC
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.152926 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.152979 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.152990 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.153010 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.153024 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.255417 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.255457 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.255472 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.255493 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.255503 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.357658 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.357693 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.357703 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.357717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.357726 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.460146 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.460212 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.460224 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.460244 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.460258 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.562130 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.562201 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.562217 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.562237 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.562249 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.664147 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.664223 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.664239 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.664257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.664271 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.766477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.766524 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.766534 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.766550 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.766561 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.869127 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.869179 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.869208 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.869231 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.869242 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
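Every condition and pod-sync error in this stretch traces back to one fact: the container runtime found no CNI network configuration in /etc/kubernetes/cni/net.d/, so it reports NetworkReady=false and the kubelet refuses to start pod sandboxes for non-host-network pods. A hedged sketch of the kind of directory check involved follows; the file extensions are the conventional CNI config names, and this is illustrative, not CRI-O's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI network config.
// CNI loaders conventionally look for *.conf, *.conflist and *.json files.
func hasCNIConfig(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("no CNI configuration file found; network plugin not ready:", err)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present")
}

On this node the directory stays empty until the OVN-Kubernetes pods (ovnkube-node-7fdwx below) come up and write their config, which is why the same message repeats on every status heartbeat.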
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.971149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.971208 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.971218 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.971234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:10 crc kubenswrapper[4642]: I0128 06:49:10.971246 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:10Z","lastTransitionTime":"2026-01-28T06:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.074108 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.074156 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.074165 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.074203 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.074214 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.097708 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:11 crc kubenswrapper[4642]: E0128 06:49:11.097851 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.098013 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.098088 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:11 crc kubenswrapper[4642]: E0128 06:49:11.098241 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:49:11 crc kubenswrapper[4642]: E0128 06:49:11.098467 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.107175 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:29:43.651706042 +0000 UTC
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.176503 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.176531 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.176541 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.176553 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.176561 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.278394 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.278451 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.278462 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.278483 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.278499 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.380944 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.380982 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.380995 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.381010 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.381019 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.483285 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.483312 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.483321 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.483335 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.483362 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.585486 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.585531 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.585540 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.585554 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.585564 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.688149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.688204 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.688214 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.688229 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.688241 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.790005 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.790037 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.790046 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.790058 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.790068 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.892502 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.892644 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.892724 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.892801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.892868 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.995239 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.995295 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.995307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.995329 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:11 crc kubenswrapper[4642]: I0128 06:49:11.995342 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:11Z","lastTransitionTime":"2026-01-28T06:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097088 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097129 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097141 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097157 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097172 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097320 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:12 crc kubenswrapper[4642]: E0128 06:49:12.097441 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.097890 4642 scope.go:117] "RemoveContainer" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.107482 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:57:49.490238495 +0000 UTC
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.200134 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.200228 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.200382 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.200408 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.200419 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.302976 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.303024 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.303038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.303068 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.303088 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
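The "Failed to update status for pod" entries that follow all fail for the same reason: the status patch goes through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the logged current time. The TLS error "certificate has expired or is not yet valid" reduces to a date comparison against the certificate's validity window; a minimal sketch of that check for a PEM file on disk follows (the path is illustrative).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity reproduces the date comparison behind the
// "certificate has expired or is not yet valid" error seen below.
func checkValidity(pemPath string, now time.Time) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate not yet valid: %v is before %v", now, cert.NotBefore)
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: %v is after %v", now, cert.NotAfter)
	}
	return nil
}

func main() {
	// Illustrative path; the real webhook cert lives in the
	// network-node-identity component's secret, not at this location.
	if err := checkValidity("/path/to/webhook-serving.crt", time.Now()); err != nil {
		fmt.Println(err)
	}
}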
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.404747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.404802 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.404811 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.404830 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.404857 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.440567 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/2.log"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.442979 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"}
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.443383 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx"
Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.453921 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.463913 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.473745 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.482663 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.496712 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.506711 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.506744 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.506754 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.506769 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.506777 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.509445 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to /host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.516810 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.544082 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.570154 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.582831 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.594128 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.604601 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.608928 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.608964 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.608976 4642 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.608992 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.609001 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.613132 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.621960 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.629112 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.636953 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.646302 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.711274 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.711308 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.711317 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.711333 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.711342 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.813656 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.813693 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.813702 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.813717 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.813728 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.915751 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.915788 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.915798 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.915814 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:12 crc kubenswrapper[4642]: I0128 06:49:12.915824 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:12Z","lastTransitionTime":"2026-01-28T06:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.017685 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.017737 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.017751 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.017770 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.017785 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.098370 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.098380 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:13 crc kubenswrapper[4642]: E0128 06:49:13.098553 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.098379 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:13 crc kubenswrapper[4642]: E0128 06:49:13.098655 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:13 crc kubenswrapper[4642]: E0128 06:49:13.098743 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.108203 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:36:36.09323839 +0000 UTC Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.120164 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.120233 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.120246 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.120265 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.120557 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.222311 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.222381 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.222392 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.222415 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.222426 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.324399 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.324447 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.324455 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.324502 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.324514 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.427421 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.427487 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.427499 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.427522 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.427534 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.448139 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/3.log" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.448865 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/2.log" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.451978 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" exitCode=1 Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.452032 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.452086 4642 scope.go:117] "RemoveContainer" containerID="b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.452809 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:49:13 crc kubenswrapper[4642]: E0128 06:49:13.453003 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.467602 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.481289 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.492802 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to /host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.501305 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.516106 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.525647 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.533315 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.533418 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.533436 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.533464 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.533491 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.541097 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.550020 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.558902 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.566201 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.573976 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.582650 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.590364 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.597866 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.606436 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.620588 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44
f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b55c1b9cd344284bee0378e9e6530f30e7315cdc57bd43c1b862457c4aeb02eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:48:43Z\\\",\\\"message\\\":\\\"nshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781118 6303 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}\\\\nI0128 06:48:43.781450 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0128 06:48:43.781451 6303 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0128 06:48:43.781452 6303 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:48\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:12Z\\\",\\\"message\\\":\\\".0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z]\\\\nI0128 06:49:12.834443 6707 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0128 06:49:12.834438 6707 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:49:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.630425 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:13Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.636855 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.636891 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.636906 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.636940 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.636952 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.739905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.739953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.739965 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.739988 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.740001 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.842890 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.842941 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.842953 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.842977 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.842992 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.945474 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.945509 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.945519 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.945534 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:13 crc kubenswrapper[4642]: I0128 06:49:13.945544 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:13Z","lastTransitionTime":"2026-01-28T06:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.048400 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.048437 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.048446 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.048462 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.048474 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.097969 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:14 crc kubenswrapper[4642]: E0128 06:49:14.098103 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.109129 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:49:02.788073244 +0000 UTC
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.150756 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.150784 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.150793 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.150808 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.150816 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.252893 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.252943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.252954 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.252974 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.252984 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.355226 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.355263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.355272 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.355286 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.355294 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.457453 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.457536 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.457550 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.457571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.457582 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.458436 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/3.log"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.462778 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"
Jan 28 06:49:14 crc kubenswrapper[4642]: E0128 06:49:14.463017 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0"
Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.474246 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.485406 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.493611 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.501553 4642 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is 
after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.508888 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.517738 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.529066 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.537786 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.547094 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.556063 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.559920 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.559955 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.559967 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.559986 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.560000 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.565479 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.580823 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:12Z\\\",\\\"message\\\":\\\".0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z]\\\\nI0128 06:49:12.834443 6707 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0128 06:49:12.834438 6707 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:49:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.590274 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to 
/host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.600308 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.609210 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.617218 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.629257 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:14Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.663097 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.663138 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:14 crc 
kubenswrapper[4642]: I0128 06:49:14.663148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.663163 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.663173 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.765257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.765293 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.765306 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.765327 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.765339 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.867864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.867905 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.867932 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.867951 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.867960 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
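
[editor's note] Every "Failed to update status for pod" entry above is rejected for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2026-01-28T06:49:14Z). A minimal Go sketch of the client-side validity check that produces "x509: certificate has expired or is not yet valid" — the certificate path is hypothetical, not taken from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path to the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/etc/webhook/serving-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the error text seen above.
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that serving certificate is rotated (or the node clock is corrected), every kubelet status patch that passes through the webhook will keep failing this same handshake check.
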
Has your network provider started?"} Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.969655 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.969682 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.969691 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.969707 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:14 crc kubenswrapper[4642]: I0128 06:49:14.969716 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:14Z","lastTransitionTime":"2026-01-28T06:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.072249 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.072309 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.072321 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.072340 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.072366 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.097675 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.097706 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.097737 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:15 crc kubenswrapper[4642]: E0128 06:49:15.098007 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:15 crc kubenswrapper[4642]: E0128 06:49:15.098091 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:15 crc kubenswrapper[4642]: E0128 06:49:15.098254 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.109270 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:38:55.179734204 +0000 UTC Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.110052 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.174311 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.174349 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.174388 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.174405 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.174416 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.277087 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.277112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.277139 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.277154 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.277163 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.379264 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.379307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.379318 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.379341 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.379363 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.481446 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.481492 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.481502 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.481522 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.481532 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.583694 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.583751 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.583760 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.583776 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.583785 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.685733 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.685773 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.685782 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.685798 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.685810 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.788159 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.788226 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.788238 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.788261 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.788270 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.890389 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.890440 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.890449 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.890467 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.890476 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.993383 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.993640 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.993723 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.993801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:15 crc kubenswrapper[4642]: I0128 06:49:15.993859 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:15Z","lastTransitionTime":"2026-01-28T06:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.095702 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.095748 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.095757 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.095773 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.095787 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.097923 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:16 crc kubenswrapper[4642]: E0128 06:49:16.098102 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.110065 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:15:27.024748989 +0000 UTC Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.198477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.198617 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.198644 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.198704 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.198731 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.302261 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.302307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.302317 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.302340 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.302349 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.404441 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.404480 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.404489 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.404504 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.404513 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.506277 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.506312 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.506320 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.506334 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.506343 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.609227 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.609303 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.609316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.609336 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.609344 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.711477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.711516 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.711526 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.711541 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.711551 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.813737 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.813804 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.813816 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.813840 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.813854 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.916895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.916943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.916952 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.916969 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:16 crc kubenswrapper[4642]: I0128 06:49:16.916978 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:16Z","lastTransitionTime":"2026-01-28T06:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.018799 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.018839 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.018848 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.018863 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.018875 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.097572 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.097622 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.097635 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:17 crc kubenswrapper[4642]: E0128 06:49:17.097704 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:17 crc kubenswrapper[4642]: E0128 06:49:17.097885 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:17 crc kubenswrapper[4642]: E0128 06:49:17.097919 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.109864 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d609634-f051-46d5-9a1d-97785c7d8c67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76f1f4a4bf66b6661b0b869ac209c793c9066518674be305037372a22141c1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f826dab9c51a49c5b54e934e0be64c809ff09247f8aa766b9a20734a3f1277da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf1949a0768be6ba1b7cfc9107df62005fa50466747e22440e5ea8cd04ec908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.111407 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:34:38.420974057 +0000 UTC
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120335 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120396 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120407 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120418 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120429 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.120883 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5886e382e4852d848554f98099a7cbdd4d0bc3f161f96fd6f34f300a748b0172\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.133950 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:12Z\\\",\\\"message\\\":\\\".0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:12Z is after 2025-08-24T17:21:41Z]\\\\nI0128 06:49:12.834443 6707 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0128 06:49:12.834438 6707 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:49:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9vq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7fdwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.143514 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"286f1a08-e4be-475e-a3ff-c37c99c41ea6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T06:48:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0128 06:48:09.026722 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 06:48:09.029824 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3147749855/tls.crt::/tmp/serving-cert-3147749855/tls.key\\\\\\\"\\\\nI0128 06:48:14.183760 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 06:48:14.185955 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 06:48:14.185992 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 06:48:14.186011 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 06:48:14.186020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 06:48:14.192438 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0128 06:48:14.192454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0128 06:48:14.192477 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 06:48:14.192488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 06:48:14.192490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 06:48:14.192493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 06:48:14.192496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0128 06:48:14.194350 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.153373 4642 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.167864 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e8cd657-e170-4331-9f82-7b84d122c8e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a285511bdaad491d745092714ce6a3d6b08414c40333226f4c7a0a1176ea7260\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fd2d14172912918c88948764f41aa4b5f2f97fa49f9edf2007a478b7c06b807\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aa815d145b52bbd51ede4a9e3c88571f861f70a6f5dc998910e506e358f73af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e6b7369b4926f78f6257e3f166c6540e93281a880c2a21ae3f87ecf32aaca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce747d5ffc2f77c768c9e7a472ef0dbf11387e6c11215622ffceef721c9f677\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45118261ada0368155ab783800af3a71afe445370f06759b98928518999151d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7d61a01712de033907e9fec425f948bcf708899adea8b125ff47bc3b9953f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hw2j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zwkn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.176614 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-28n48" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d569b7c-8a0e-4074-b61f-4139413b9849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T06:49:07Z\\\",\\\"message\\\":\\\"2026-01-28T06:48:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b\\\\n2026-01-28T06:48:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8833352e-4ecc-4aab-baeb-af22fcd68e4b to /host/opt/cni/bin/\\\\n2026-01-28T06:48:22Z [verbose] multus-daemon started\\\\n2026-01-28T06:48:22Z [verbose] Readiness Indicator file check\\\\n2026-01-28T06:49:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kp8zj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-28n48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.184019 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0008ccce-c71a-484f-9df7-02d5d92e8b02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04f0fbcd26ac8f7340450ee1edb0990146e1fee5ac1a5de78611d4a3c6ca7516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s57nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.197732 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"364a41f5-b7da-4543-8e45-f08af6a85704\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f33a0fcd4bb9f2c421ecbba53f0f91739300960dc39210ccfe09a7a34354da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89aeff0a865741b52eac31ea0348bac1ad46e0ea2d02a9418bd3d54962ced4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24b044de7f7defa8f02a1cbf7533140e1d7db48bacf8ba8f712ec06894884d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c24a6d04920c0bca6339c96dad720c43253f29
d88dc8024142a54ed579c3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4066ece05190c23d6f3616b681a6ffa95bba26a803aa7f382a268e64d428c07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd416d95b0a09e8e551a5d24bf2a55ae70ef5c4cffe697f53917be7358bbebed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd416d95b0a09e8e551a5d24bf2a55ae70ef5c4cffe697f53917be7358bbebed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775f9843a2dd1e7cce276625e6be7ce3b4e51bbf902f33ddd44dc506f2d09f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://775f9843a2dd1e7cce276625e6be7ce3b4e51bbf902f33ddd44dc506f2d09f05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://804bb38839933015daf31dfa5893c9ceb56641ff31ad96f3696c8421e4b30ba1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804bb38839933015daf31dfa5893c9ceb56641ff31ad96f3696c8421e4b30ba1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.205492 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed93b88d-b3ec-4cd8-be12-bc48b0c33702\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec208f2eb87996ab0f1883e6b216ded8157366b59ee57759273141e3e12d243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25a19a0551b29145aabbd9238cf56dafbbbdc28ad9fc6454753375eb5871265b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ce5d04a42dc4cfb6547e7bc0e2a9957b5eba820b37ad9f1c50dd412b28aec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3159057a36a02a3782786be649812ee0be2e1af5df8aeda45327dcee067a08dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T06:47:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T06:47:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.213989 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222003 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222801 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222832 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222840 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222858 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.222868 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.230429 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4acb8b3be7913130bfe0acca1f8ed39f30ee2121eec228e4878f5645899f53cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfb4620ab8d587f286964f74ee575b4837fe55979a2f70767d30709cdb4fb2a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.238435 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tzmpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e61b4710-6f7c-4ab1-b7bb-50d445aeda93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51dac6b178ad3961488d10a37458f5d40c3345f0091f7050a4c3d310ea46704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tzmpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.246253 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"338ae955-434d-40bd-8519-580badf3e175\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8c176f35c846bbf16f17d93e6c30e3708e24e70060ebc0e78f9714fc8c402f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdf4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hdsmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.253654 4642 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ad39da-99cf-4851-be79-a7d38df54055\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7dl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bpz6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.260997 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e3e00c82a4c8d33e89ddc75f7c6c393e11f7a98c88c8e3e160fd8f206ece474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.268057 4642 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c649375a-21a3-43f5-bd77-fbc87de527fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T06:48:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f22af3d5ead6bcc96f5a78901579a4c02637186393613dad0006ecc09e4731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad8af253ff523788331637fe108b083ad990eea4334fdea34d2e3b299de83eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T06:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjkkw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T06:48:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7js4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:17Z is after 2025-08-24T17:21:41Z" Jan 28 
06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.324753    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.324937    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.325014    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.325232    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.325429    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.427340    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.427503    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.427560    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.427620    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.427670    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.530101    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.530128    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.530137    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.530151    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.530161    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.633045    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.633088    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.633099    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.633114    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.633127    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.735524    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.735566    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.735577    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.735593    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.735607    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.838176    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.838264    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.838273    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.838290    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.838301    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.941020    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.941052    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.941062    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.941076    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:17 crc kubenswrapper[4642]: I0128 06:49:17.941086    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:17Z","lastTransitionTime":"2026-01-28T06:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.043433    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.043477    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.043488    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.043505    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.043517    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.097966    4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.098144    4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.112677    4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:25:04.647911821 +0000 UTC
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.146014    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.146067    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.146079    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.146092    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.146107    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.248925    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.248969    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.248979    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.248996    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.249007    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.351003    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.351110    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.351126    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.351143    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.351154    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.453850    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.453902    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.453917    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.453941    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.453956    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.556542    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.556596    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.556608    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.556632    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.556645    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.659098    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.659157    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.659169    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.659208    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.659223    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.762450    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.762488    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.762500    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.762519    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.762540    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.763863    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.763927    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.763948    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.763975    4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.763992    4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.773491    4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.776965 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.777002 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.777012 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.777026 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.777035 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.785346 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.788253 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.788293 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.788307 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.788323 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.788332 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.796530 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.799000 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.799050 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.799062 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.799076 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.799104 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.808661 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.811442 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.811468 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.811495 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.811506 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.811514 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.820378 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820329 4642 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T06:49:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"af9fb491-28d4-46e5-857c-727aaf9d83d0\\\",\\\"systemUUID\\\":\\\"6d7f2c45-295c-4dcd-b97d-d5c383274b44\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T06:49:18Z is after 2025-08-24T17:21:41Z" Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820448 4642 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820547 4642 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:22.820527641 +0000 UTC m=+146.052616450 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.820713 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.820750 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820845 4642 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820861 4642 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820899 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:50:22.820892317 +0000 UTC m=+146.052981127 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.820917 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 06:50:22.820910843 +0000 UTC m=+146.052999651 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.865399 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.865443 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.865454 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.865474 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.865488 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.922007 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.922081 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922243 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922267 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922280 4642 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922328 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed.
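
The repeated `object "ns"/"name" not registered` failures typically mean the kubelet's local ConfigMap/Secret cache has not (re)registered the referenced objects for any pod yet, which is common right after a kubelet restart; it does not by itself prove the objects are missing from the API. To rule out genuine absence, one can query the API server directly. A minimal client-go sketch under that assumption; the kubeconfig path is a placeholder, and the object names are taken from the failing mounts above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "openshift-network-console"
	// Names taken from the MountVolume failures in the log above.
	if _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), "networking-console-plugin", metav1.GetOptions{}); err != nil {
		fmt.Println("configmap lookup:", err)
	} else {
		fmt.Println("configmap exists; the kubelet cache likely just has not registered it yet")
	}
	if _, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), "networking-console-plugin-cert", metav1.GetOptions{}); err != nil {
		fmt.Println("secret lookup:", err)
	} else {
		fmt.Println("secret exists; the kubelet cache likely just has not registered it yet")
	}
}
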
No retries permitted until 2026-01-28 06:50:22.922315375 +0000 UTC m=+146.154404184 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922243 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922386 4642 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922400 4642 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:49:18 crc kubenswrapper[4642]: E0128 06:49:18.922448 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 06:50:22.922435121 +0000 UTC m=+146.154523940 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.967638 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.967680 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.967689 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.967708 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:18 crc kubenswrapper[4642]: I0128 06:49:18.967717 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:18Z","lastTransitionTime":"2026-01-28T06:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.070475 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.070531 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.070543 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.070563 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.070575 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.097950 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.098080 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.097971 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:19 crc kubenswrapper[4642]: E0128 06:49:19.098336 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:19 crc kubenswrapper[4642]: E0128 06:49:19.098483 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:19 crc kubenswrapper[4642]: E0128 06:49:19.098613 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
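
Each "Node became not ready" record above carries the full Ready condition as a JSON payload. Decoding one makes the fields explicit; a small self-contained Go example over an abridged copy of the payload (the message is shortened for readability):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of a core/v1 NodeCondition as they
// appear in the condition={...} payloads above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Abridged copy of a payload from the log above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}
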
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.113221 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:42:11.594662222 +0000 UTC Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.173263 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.173308 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.173318 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.173337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.173347 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.276520 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.276633 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.276712 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.276812 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.276885 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.380852 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.381009 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.381077 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.381149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.381255 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.484103 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.484142 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.484150 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.484166 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.484177 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.586265 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.586318 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.586328 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.586344 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.586366 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.688205 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.688248 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.688256 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.688270 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.688279 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.789895 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.789927 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.789934 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.789944 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.789953 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.891858 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.891892 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.891901 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.891917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.891925 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.994257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.994316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.994326 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.994348 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:19 crc kubenswrapper[4642]: I0128 06:49:19.994375 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:19Z","lastTransitionTime":"2026-01-28T06:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.096396 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.096436 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.096450 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.096470 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.096482 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.097763 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:20 crc kubenswrapper[4642]: E0128 06:49:20.097939 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
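
When triaging a stretch of log like this, it helps to split every record on its klog header: severity letter, MMDD date, wall-clock timestamp, process id, then the emitting source file and line. A rough Go parser for the header format used throughout this file; the field names in the output are descriptive assumptions, not klog terminology.

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches record prefixes such as:
//   I0128 06:49:19.097950 4642 util.go:30] ...
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\]`)

func main() {
	line := `I0128 06:49:19.097950 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog record")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s site=%s\n", m[1], m[2], m[3], m[4], m[5])
}

Counting records by the site field (here kubelet_node_status.go:724 and setters.go:603 dominate) quickly shows that this section is one condition being re-reported every ~100ms, not many distinct failures.
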
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.113959 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:33:54.702636531 +0000 UTC Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.197899 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.197947 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.197959 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.197977 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.197990 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.300066 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.300103 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.300114 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.300128 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.300136 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.401926 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.401958 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.401966 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.401975 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.401984 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.503762 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.503847 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.503857 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.503868 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.503881 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.606057 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.606108 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.606117 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.606136 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.606151 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.708665 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.708725 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.708734 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.708747 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.708756 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.810621 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.810659 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.810668 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.810684 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.810695 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.913277 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.913345 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.913356 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.913393 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:20 crc kubenswrapper[4642]: I0128 06:49:20.913408 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:20Z","lastTransitionTime":"2026-01-28T06:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.015703 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.015753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.015763 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.015780 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.015795 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.098016 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.098003 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.098264 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:21 crc kubenswrapper[4642]: E0128 06:49:21.098440 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:21 crc kubenswrapper[4642]: E0128 06:49:21.098629 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:21 crc kubenswrapper[4642]: E0128 06:49:21.098867 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
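
The NotReady loop above keeps pointing at a single root cause: no CNI network configuration under /etc/kubernetes/cni/net.d/ (the path is quoted from the log message itself). A quick way to see what, if anything, the network plugin has written there is to list files with the conventional CNI config extensions; the .conf/.conflist/.json set is the usual convention and an assumption here, not something this log states.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; the network plugin has not written its config yet")
	}
}
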
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.114534 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:42:51.934824896 +0000 UTC Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.117903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.117945 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.117956 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.117971 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.117983 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.220175 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.220239 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.220251 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.220267 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.220279 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.322894 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.322937 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.322947 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.322973 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.322985 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.424804 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.424903 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.424917 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.424930 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.424939 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.527502 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.527560 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.527575 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.527592 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.527607 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.629937 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.629999 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.630011 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.630040 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.630054 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.732316 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.732359 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.732378 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.732395 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.732407 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.835148 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.835216 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.835230 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.835251 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.835263 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.937971 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.938009 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.938020 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.938039 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:21 crc kubenswrapper[4642]: I0128 06:49:21.938050 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:21Z","lastTransitionTime":"2026-01-28T06:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.040527 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.040562 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.040578 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.040592 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.040602 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:22Z","lastTransitionTime":"2026-01-28T06:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.098404 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:22 crc kubenswrapper[4642]: E0128 06:49:22.098617 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
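
The kubelet-serving certificate_manager lines in this section report a different rotation deadline on every pass (2026-01-04, 2025-12-15, 2025-12-28, ...). That is expected behavior: the deadline is re-jittered inside the certificate's validity window each time it is recomputed. A sketch of the idea, assuming the roughly 70-90%-of-lifetime window client-go is known to use; only the 2026-02-24 expiry comes from the log, while the exact jitter fraction and the 30-day lifetime are assumptions.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point inside the certificate's validity
// window, in the spirit of client-go's certificate manager (the 70-90%
// fraction is an assumption, not a quote of the upstream code).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log; the 30-day lifetime is assumed.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-30 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}

Each run of the loop prints a different deadline, which is exactly the pattern visible in the certificate_manager.go:356 records here; the deadlines sit in the past because the rotation itself cannot complete while the API connection is failing.
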
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.108164 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.114750 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:38:35.598845868 +0000 UTC Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.143519 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.143542 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.143552 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.143571 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.143582 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:22Z","lastTransitionTime":"2026-01-28T06:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.245402 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.245442 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.245453 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.245470 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:22 crc kubenswrapper[4642]: I0128 06:49:22.245483 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:22Z","lastTransitionTime":"2026-01-28T06:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.062225 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.062257 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.062266 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.062278 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.062286 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:23Z","lastTransitionTime":"2026-01-28T06:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.097911 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.097955 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:23 crc kubenswrapper[4642]: E0128 06:49:23.098022 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.097911 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:23 crc kubenswrapper[4642]: E0128 06:49:23.098121 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:49:23 crc kubenswrapper[4642]: E0128 06:49:23.098220 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:49:23 crc kubenswrapper[4642]: I0128 06:49:23.114901 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:53:21.503760179 +0000 UTC
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.098240 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:24 crc kubenswrapper[4642]: E0128 06:49:24.098376 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.115513 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:28:23.636400752 +0000 UTC
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.186027 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.186055 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.186063 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.186079 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:24 crc kubenswrapper[4642]: I0128 06:49:24.186106 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:24Z","lastTransitionTime":"2026-01-28T06:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.003115 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.003140 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.003149 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.003160 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.003170 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:25Z","lastTransitionTime":"2026-01-28T06:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.097765 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.097846 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.097890 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:25 crc kubenswrapper[4642]: E0128 06:49:25.098176 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 06:49:25 crc kubenswrapper[4642]: E0128 06:49:25.098272 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:49:25 crc kubenswrapper[4642]: E0128 06:49:25.098329 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.099267 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"
Jan 28 06:49:25 crc kubenswrapper[4642]: E0128 06:49:25.099511 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0"
Jan 28 06:49:25 crc kubenswrapper[4642]: I0128 06:49:25.115926 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:35:53.209632381 +0000 UTC
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.023983 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.024020 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.024029 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.024045 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.024054 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.098441 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:26 crc kubenswrapper[4642]: E0128 06:49:26.098608 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.116230 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:37:40.059136339 +0000 UTC
Has your network provider started?"} Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.332898 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.332950 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.332960 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.332979 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.332999 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.435171 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.435224 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.435234 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.435251 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.435264 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.537640 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.537690 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.537699 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.537715 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.537728 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.640027 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.640083 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.640095 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.640118 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.640131 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.741943 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.741988 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.741998 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.742025 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.742037 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.844599 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.844648 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.844658 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.844676 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.844689 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.947800 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.947854 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.947865 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.947888 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:26 crc kubenswrapper[4642]: I0128 06:49:26.947903 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:26Z","lastTransitionTime":"2026-01-28T06:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.049990 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.050028 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.050038 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.050054 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.050066 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.097910 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:27 crc kubenswrapper[4642]: E0128 06:49:27.098080 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.098129 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.098269 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:27 crc kubenswrapper[4642]: E0128 06:49:27.098299 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:27 crc kubenswrapper[4642]: E0128 06:49:27.098491 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.116465 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:45:10.718578427 +0000 UTC Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.129002 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.128989898 podStartE2EDuration="5.128989898s" podCreationTimestamp="2026-01-28 06:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.117452367 +0000 UTC m=+90.349541177" watchObservedRunningTime="2026-01-28 06:49:27.128989898 +0000 UTC m=+90.361078707" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.144015 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7js4" podStartSLOduration=67.143998629 podStartE2EDuration="1m7.143998629s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.143776091 +0000 UTC m=+90.375864910" watchObservedRunningTime="2026-01-28 06:49:27.143998629 +0000 UTC m=+90.376087438" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.152146 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.152179 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.152325 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.152341 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.152353 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.172918 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.17289317 podStartE2EDuration="1m13.17289317s" podCreationTimestamp="2026-01-28 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.156341926 +0000 UTC m=+90.388430735" watchObservedRunningTime="2026-01-28 06:49:27.17289317 +0000 UTC m=+90.404981979"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.212347 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.212327474 podStartE2EDuration="1m12.212327474s" podCreationTimestamp="2026-01-28 06:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.202347985 +0000 UTC m=+90.434436805" watchObservedRunningTime="2026-01-28 06:49:27.212327474 +0000 UTC m=+90.444416283"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.228301 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zwkn6" podStartSLOduration=67.22827923 podStartE2EDuration="1m7.22827923s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.227925845 +0000 UTC m=+90.460014654" watchObservedRunningTime="2026-01-28 06:49:27.22827923 +0000 UTC m=+90.460368049"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.242053 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-28n48" podStartSLOduration=67.242036076 podStartE2EDuration="1m7.242036076s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.241712408 +0000 UTC m=+90.473801217" watchObservedRunningTime="2026-01-28 06:49:27.242036076 +0000 UTC m=+90.474124885"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.249831 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wcwkw" podStartSLOduration=67.249809945 podStartE2EDuration="1m7.249809945s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.249602053 +0000 UTC m=+90.481690852" watchObservedRunningTime="2026-01-28 06:49:27.249809945 +0000 UTC m=+90.481898754"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.254960 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.255006 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.255019 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.255035 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.255046 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.267150 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podStartSLOduration=67.267131088 podStartE2EDuration="1m7.267131088s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.257841728 +0000 UTC m=+90.489930537" watchObservedRunningTime="2026-01-28 06:49:27.267131088 +0000 UTC m=+90.499219897"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.285791 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.285777836 podStartE2EDuration="12.285777836s" podCreationTimestamp="2026-01-28 06:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.28485518 +0000 UTC m=+90.516943989" watchObservedRunningTime="2026-01-28 06:49:27.285777836 +0000 UTC m=+90.517866645"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.305342 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.305322553 podStartE2EDuration="40.305322553s" podCreationTimestamp="2026-01-28 06:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.295030166 +0000 UTC m=+90.527118975" watchObservedRunningTime="2026-01-28 06:49:27.305322553 +0000 UTC m=+90.537411362"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.357898 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.357933 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.357942 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.357956 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.357966 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.460944 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.461000 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.461012 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.461026 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.461039 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.566446 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.566498 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.566509 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.566526 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.566538 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.669700 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.670068 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.670140 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.670302 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.670394 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.773337 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.773441 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.773460 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.773483 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.773498 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.877864 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.877914 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.877928 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.877947 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.877959 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.981407 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.981466 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.981477 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.981498 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:27 crc kubenswrapper[4642]: I0128 06:49:27.981513 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:27Z","lastTransitionTime":"2026-01-28T06:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.085032 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.085092 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.085112 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.085137 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.085152 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.098345 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r"
Jan 28 06:49:28 crc kubenswrapper[4642]: E0128 06:49:28.098527 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.116629 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:37:11.599414123 +0000 UTC
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.188685 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.188739 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.188753 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.188777 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.188791 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.291565 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.291623 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.291641 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.291663 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.291677 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.395031 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.395070 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.395079 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.395099 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.395113 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.497609 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.497655 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.497666 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.497688 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.497705 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.599623 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.599656 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.599663 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.599673 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.599711 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.702596 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.702647 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.702657 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.702676 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.702689 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.804826 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.804869 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.804879 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.804900 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.804911 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.907331 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.907386 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.907396 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.907414 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:28 crc kubenswrapper[4642]: I0128 06:49:28.907424 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:28Z","lastTransitionTime":"2026-01-28T06:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.009827 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.009865 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.009874 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.009888 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.009898 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:29Z","lastTransitionTime":"2026-01-28T06:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.097865 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.097903 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.098027 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:49:29 crc kubenswrapper[4642]: E0128 06:49:29.098137 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:29 crc kubenswrapper[4642]: E0128 06:49:29.098238 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:29 crc kubenswrapper[4642]: E0128 06:49:29.098397 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.113064 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.113109 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.113122 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.113137 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.113152 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:29Z","lastTransitionTime":"2026-01-28T06:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.117267 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:32:06.478464227 +0000 UTC
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.132143 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.132229 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.132243 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.132264 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.132278 4642 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T06:49:29Z","lastTransitionTime":"2026-01-28T06:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.170038 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tzmpk" podStartSLOduration=69.170017673 podStartE2EDuration="1m9.170017673s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:27.331996726 +0000 UTC m=+90.564085534" watchObservedRunningTime="2026-01-28 06:49:29.170017673 +0000 UTC m=+92.402106482"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.170270 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"]
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.170677 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.172607 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.172636 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.172915 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.173367 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.322584 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1d41231-5430-460a-8d0f-aaadd3a3c46e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.322668 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d41231-5430-460a-8d0f-aaadd3a3c46e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.322701 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a1d41231-5430-460a-8d0f-aaadd3a3c46e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.322749 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1d41231-5430-460a-8d0f-aaadd3a3c46e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.322769 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d41231-5430-460a-8d0f-aaadd3a3c46e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.423834 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d41231-5430-460a-8d0f-aaadd3a3c46e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc 
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.423915 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1d41231-5430-460a-8d0f-aaadd3a3c46e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.423937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d41231-5430-460a-8d0f-aaadd3a3c46e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.423966 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1d41231-5430-460a-8d0f-aaadd3a3c46e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.424037 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1d41231-5430-460a-8d0f-aaadd3a3c46e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.424086 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a1d41231-5430-460a-8d0f-aaadd3a3c46e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.425335 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d41231-5430-460a-8d0f-aaadd3a3c46e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.433859 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d41231-5430-460a-8d0f-aaadd3a3c46e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.441132 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1d41231-5430-460a-8d0f-aaadd3a3c46e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26"
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1d41231-5430-460a-8d0f-aaadd3a3c46e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rch26\" (UID: \"a1d41231-5430-460a-8d0f-aaadd3a3c46e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.482026 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" Jan 28 06:49:29 crc kubenswrapper[4642]: I0128 06:49:29.509924 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" event={"ID":"a1d41231-5430-460a-8d0f-aaadd3a3c46e","Type":"ContainerStarted","Data":"aaa3c5eac9cc762688358fedd84c416aba2ab29469c0eca291ef02b7b75842f8"} Jan 28 06:49:30 crc kubenswrapper[4642]: I0128 06:49:30.097421 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:30 crc kubenswrapper[4642]: E0128 06:49:30.097584 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:30 crc kubenswrapper[4642]: I0128 06:49:30.117667 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:11:15.235666263 +0000 UTC Jan 28 06:49:30 crc kubenswrapper[4642]: I0128 06:49:30.117734 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 06:49:30 crc kubenswrapper[4642]: I0128 06:49:30.125094 4642 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 06:49:30 crc kubenswrapper[4642]: I0128 06:49:30.518624 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" event={"ID":"a1d41231-5430-460a-8d0f-aaadd3a3c46e","Type":"ContainerStarted","Data":"2d28b0eed4c75114fe78cb3f1777b255702ade30d86bcd81696053fba155d067"} Jan 28 06:49:31 crc kubenswrapper[4642]: I0128 06:49:31.097452 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:31 crc kubenswrapper[4642]: E0128 06:49:31.097625 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:31 crc kubenswrapper[4642]: I0128 06:49:31.097890 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:31 crc kubenswrapper[4642]: E0128 06:49:31.097982 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:31 crc kubenswrapper[4642]: I0128 06:49:31.098173 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:31 crc kubenswrapper[4642]: E0128 06:49:31.098453 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:32 crc kubenswrapper[4642]: I0128 06:49:32.097476 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:32 crc kubenswrapper[4642]: E0128 06:49:32.097595 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:33 crc kubenswrapper[4642]: I0128 06:49:33.098083 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:33 crc kubenswrapper[4642]: I0128 06:49:33.098179 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:33 crc kubenswrapper[4642]: I0128 06:49:33.098102 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:33 crc kubenswrapper[4642]: E0128 06:49:33.098319 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:33 crc kubenswrapper[4642]: E0128 06:49:33.098497 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:33 crc kubenswrapper[4642]: E0128 06:49:33.098620 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:34 crc kubenswrapper[4642]: I0128 06:49:34.098225 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:34 crc kubenswrapper[4642]: E0128 06:49:34.098830 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:35 crc kubenswrapper[4642]: I0128 06:49:35.098160 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:35 crc kubenswrapper[4642]: I0128 06:49:35.098176 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:35 crc kubenswrapper[4642]: I0128 06:49:35.098283 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:35 crc kubenswrapper[4642]: E0128 06:49:35.098865 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:35 crc kubenswrapper[4642]: E0128 06:49:35.099045 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:35 crc kubenswrapper[4642]: E0128 06:49:35.099288 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:36 crc kubenswrapper[4642]: I0128 06:49:36.097372 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:36 crc kubenswrapper[4642]: E0128 06:49:36.097522 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:37 crc kubenswrapper[4642]: I0128 06:49:37.098097 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:37 crc kubenswrapper[4642]: I0128 06:49:37.098305 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:37 crc kubenswrapper[4642]: I0128 06:49:37.099368 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:37 crc kubenswrapper[4642]: E0128 06:49:37.099527 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:37 crc kubenswrapper[4642]: E0128 06:49:37.099650 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:37 crc kubenswrapper[4642]: E0128 06:49:37.099726 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:38 crc kubenswrapper[4642]: I0128 06:49:38.097654 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:38 crc kubenswrapper[4642]: E0128 06:49:38.097816 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:38 crc kubenswrapper[4642]: I0128 06:49:38.098585 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:49:38 crc kubenswrapper[4642]: E0128 06:49:38.098782 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:49:38 crc kubenswrapper[4642]: I0128 06:49:38.507009 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:38 crc kubenswrapper[4642]: E0128 06:49:38.507171 4642 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:49:38 crc kubenswrapper[4642]: E0128 06:49:38.507268 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs podName:e7ad39da-99cf-4851-be79-a7d38df54055 nodeName:}" failed. No retries permitted until 2026-01-28 06:50:42.507252479 +0000 UTC m=+165.739341289 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs") pod "network-metrics-daemon-bpz6r" (UID: "e7ad39da-99cf-4851-be79-a7d38df54055") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 06:49:39 crc kubenswrapper[4642]: I0128 06:49:39.098416 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:39 crc kubenswrapper[4642]: I0128 06:49:39.098490 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:39 crc kubenswrapper[4642]: I0128 06:49:39.098438 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:39 crc kubenswrapper[4642]: E0128 06:49:39.098542 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:39 crc kubenswrapper[4642]: E0128 06:49:39.098648 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:39 crc kubenswrapper[4642]: E0128 06:49:39.098697 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:40 crc kubenswrapper[4642]: I0128 06:49:40.097433 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:40 crc kubenswrapper[4642]: E0128 06:49:40.097616 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:41 crc kubenswrapper[4642]: I0128 06:49:41.097460 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:41 crc kubenswrapper[4642]: I0128 06:49:41.097523 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:41 crc kubenswrapper[4642]: I0128 06:49:41.097554 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:41 crc kubenswrapper[4642]: E0128 06:49:41.097649 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:41 crc kubenswrapper[4642]: E0128 06:49:41.097747 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:41 crc kubenswrapper[4642]: E0128 06:49:41.097948 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:42 crc kubenswrapper[4642]: I0128 06:49:42.097875 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:42 crc kubenswrapper[4642]: E0128 06:49:42.098078 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:43 crc kubenswrapper[4642]: I0128 06:49:43.098386 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:43 crc kubenswrapper[4642]: E0128 06:49:43.098695 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:43 crc kubenswrapper[4642]: I0128 06:49:43.098405 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:43 crc kubenswrapper[4642]: E0128 06:49:43.098787 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:43 crc kubenswrapper[4642]: I0128 06:49:43.098403 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:43 crc kubenswrapper[4642]: E0128 06:49:43.098839 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:44 crc kubenswrapper[4642]: I0128 06:49:44.098260 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:44 crc kubenswrapper[4642]: E0128 06:49:44.098411 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:45 crc kubenswrapper[4642]: I0128 06:49:45.098038 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:45 crc kubenswrapper[4642]: I0128 06:49:45.098118 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:45 crc kubenswrapper[4642]: I0128 06:49:45.098155 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:45 crc kubenswrapper[4642]: E0128 06:49:45.098288 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:45 crc kubenswrapper[4642]: E0128 06:49:45.098528 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:45 crc kubenswrapper[4642]: E0128 06:49:45.098435 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:46 crc kubenswrapper[4642]: I0128 06:49:46.097762 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:46 crc kubenswrapper[4642]: E0128 06:49:46.097877 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:47 crc kubenswrapper[4642]: I0128 06:49:47.098378 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:47 crc kubenswrapper[4642]: I0128 06:49:47.098494 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:47 crc kubenswrapper[4642]: I0128 06:49:47.098631 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:47 crc kubenswrapper[4642]: E0128 06:49:47.098647 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:47 crc kubenswrapper[4642]: E0128 06:49:47.098693 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:47 crc kubenswrapper[4642]: E0128 06:49:47.098744 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:48 crc kubenswrapper[4642]: I0128 06:49:48.097374 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:48 crc kubenswrapper[4642]: E0128 06:49:48.097529 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:49 crc kubenswrapper[4642]: I0128 06:49:49.097997 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:49 crc kubenswrapper[4642]: I0128 06:49:49.098248 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:49 crc kubenswrapper[4642]: E0128 06:49:49.098339 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:49 crc kubenswrapper[4642]: E0128 06:49:49.098446 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:49 crc kubenswrapper[4642]: I0128 06:49:49.098591 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:49 crc kubenswrapper[4642]: E0128 06:49:49.098856 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:50 crc kubenswrapper[4642]: I0128 06:49:50.097701 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:50 crc kubenswrapper[4642]: E0128 06:49:50.098139 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:50 crc kubenswrapper[4642]: I0128 06:49:50.098453 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:49:50 crc kubenswrapper[4642]: E0128 06:49:50.098657 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7fdwx_openshift-ovn-kubernetes(0f5d2a3f-25d8-4051-8000-30ec01a14eb0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" Jan 28 06:49:51 crc kubenswrapper[4642]: I0128 06:49:51.097669 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:51 crc kubenswrapper[4642]: E0128 06:49:51.097816 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:51 crc kubenswrapper[4642]: I0128 06:49:51.097893 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:51 crc kubenswrapper[4642]: I0128 06:49:51.098153 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:51 crc kubenswrapper[4642]: E0128 06:49:51.098319 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:51 crc kubenswrapper[4642]: E0128 06:49:51.098452 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:52 crc kubenswrapper[4642]: I0128 06:49:52.097846 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:52 crc kubenswrapper[4642]: E0128 06:49:52.098104 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.097796 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:53 crc kubenswrapper[4642]: E0128 06:49:53.097949 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.098008 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.097796 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:53 crc kubenswrapper[4642]: E0128 06:49:53.098227 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:53 crc kubenswrapper[4642]: E0128 06:49:53.098445 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.582391 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/1.log" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.582961 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/0.log" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.583011 4642 generic.go:334] "Generic (PLEG): container finished" podID="3d569b7c-8a0e-4074-b61f-4139413b9849" containerID="0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5" exitCode=1 Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.583052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerDied","Data":"0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5"} Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.583103 4642 scope.go:117] "RemoveContainer" containerID="04752968302ea040e4a64b015290addc260974a7243137d55948c3e1a3e3a849" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.583871 4642 scope.go:117] "RemoveContainer" containerID="0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5" Jan 28 06:49:53 crc kubenswrapper[4642]: E0128 06:49:53.584097 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-28n48_openshift-multus(3d569b7c-8a0e-4074-b61f-4139413b9849)\"" pod="openshift-multus/multus-28n48" podUID="3d569b7c-8a0e-4074-b61f-4139413b9849" Jan 28 06:49:53 crc kubenswrapper[4642]: I0128 06:49:53.602162 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rch26" podStartSLOduration=93.602144393 podStartE2EDuration="1m33.602144393s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:49:30.530206523 +0000 UTC m=+93.762295332" watchObservedRunningTime="2026-01-28 06:49:53.602144393 +0000 UTC m=+116.834233202" Jan 28 06:49:54 crc kubenswrapper[4642]: I0128 06:49:54.097813 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:54 crc kubenswrapper[4642]: E0128 06:49:54.097967 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:54 crc kubenswrapper[4642]: I0128 06:49:54.590446 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/1.log" Jan 28 06:49:55 crc kubenswrapper[4642]: I0128 06:49:55.097722 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:55 crc kubenswrapper[4642]: I0128 06:49:55.097722 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:55 crc kubenswrapper[4642]: E0128 06:49:55.097831 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:55 crc kubenswrapper[4642]: E0128 06:49:55.098082 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:55 crc kubenswrapper[4642]: I0128 06:49:55.098316 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:55 crc kubenswrapper[4642]: E0128 06:49:55.098409 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:56 crc kubenswrapper[4642]: I0128 06:49:56.097591 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:56 crc kubenswrapper[4642]: E0128 06:49:56.097717 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:57 crc kubenswrapper[4642]: E0128 06:49:57.096710 4642 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 06:49:57 crc kubenswrapper[4642]: I0128 06:49:57.098063 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:57 crc kubenswrapper[4642]: I0128 06:49:57.098099 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:57 crc kubenswrapper[4642]: I0128 06:49:57.098082 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:57 crc kubenswrapper[4642]: E0128 06:49:57.098915 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:49:57 crc kubenswrapper[4642]: E0128 06:49:57.099069 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:57 crc kubenswrapper[4642]: E0128 06:49:57.099197 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:57 crc kubenswrapper[4642]: E0128 06:49:57.162578 4642 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 06:49:58 crc kubenswrapper[4642]: I0128 06:49:58.097506 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:49:58 crc kubenswrapper[4642]: E0128 06:49:58.097687 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:49:59 crc kubenswrapper[4642]: I0128 06:49:59.098333 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:49:59 crc kubenswrapper[4642]: I0128 06:49:59.098478 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:49:59 crc kubenswrapper[4642]: I0128 06:49:59.098351 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:49:59 crc kubenswrapper[4642]: E0128 06:49:59.098623 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:49:59 crc kubenswrapper[4642]: E0128 06:49:59.098517 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:49:59 crc kubenswrapper[4642]: E0128 06:49:59.098785 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:00 crc kubenswrapper[4642]: I0128 06:50:00.098088 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:00 crc kubenswrapper[4642]: E0128 06:50:00.098293 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:01 crc kubenswrapper[4642]: I0128 06:50:01.098443 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:01 crc kubenswrapper[4642]: I0128 06:50:01.098560 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:01 crc kubenswrapper[4642]: E0128 06:50:01.098608 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:01 crc kubenswrapper[4642]: E0128 06:50:01.098742 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:01 crc kubenswrapper[4642]: I0128 06:50:01.098755 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:01 crc kubenswrapper[4642]: E0128 06:50:01.098967 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:02 crc kubenswrapper[4642]: I0128 06:50:02.098013 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:02 crc kubenswrapper[4642]: E0128 06:50:02.098132 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:02 crc kubenswrapper[4642]: E0128 06:50:02.163667 4642 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.097377 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.097401 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:03 crc kubenswrapper[4642]: E0128 06:50:03.097548 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:03 crc kubenswrapper[4642]: E0128 06:50:03.097677 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.098039 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:03 crc kubenswrapper[4642]: E0128 06:50:03.098178 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.098415 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.621471 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/3.log" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.624133 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerStarted","Data":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.624538 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.648562 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podStartSLOduration=103.648543579 podStartE2EDuration="1m43.648543579s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:03.648412383 +0000 UTC m=+126.880501191" watchObservedRunningTime="2026-01-28 06:50:03.648543579 +0000 UTC m=+126.880632389" Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.790249 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bpz6r"] Jan 28 06:50:03 crc kubenswrapper[4642]: I0128 06:50:03.790345 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:03 crc kubenswrapper[4642]: E0128 06:50:03.790442 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:05 crc kubenswrapper[4642]: I0128 06:50:05.097837 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:05 crc kubenswrapper[4642]: I0128 06:50:05.097888 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:05 crc kubenswrapper[4642]: I0128 06:50:05.098022 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:05 crc kubenswrapper[4642]: E0128 06:50:05.098231 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:05 crc kubenswrapper[4642]: E0128 06:50:05.098302 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:05 crc kubenswrapper[4642]: E0128 06:50:05.098376 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:06 crc kubenswrapper[4642]: I0128 06:50:06.098389 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:06 crc kubenswrapper[4642]: E0128 06:50:06.098563 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:07 crc kubenswrapper[4642]: I0128 06:50:07.097394 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:07 crc kubenswrapper[4642]: I0128 06:50:07.097455 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:07 crc kubenswrapper[4642]: I0128 06:50:07.097419 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:07 crc kubenswrapper[4642]: E0128 06:50:07.098304 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:07 crc kubenswrapper[4642]: E0128 06:50:07.098392 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:07 crc kubenswrapper[4642]: E0128 06:50:07.098544 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:07 crc kubenswrapper[4642]: E0128 06:50:07.164529 4642 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 06:50:08 crc kubenswrapper[4642]: I0128 06:50:08.098025 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:08 crc kubenswrapper[4642]: I0128 06:50:08.098307 4642 scope.go:117] "RemoveContainer" containerID="0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5" Jan 28 06:50:08 crc kubenswrapper[4642]: E0128 06:50:08.098309 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:08 crc kubenswrapper[4642]: I0128 06:50:08.641830 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/1.log" Jan 28 06:50:08 crc kubenswrapper[4642]: I0128 06:50:08.641887 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerStarted","Data":"284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb"} Jan 28 06:50:09 crc kubenswrapper[4642]: I0128 06:50:09.098031 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:09 crc kubenswrapper[4642]: I0128 06:50:09.098082 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:09 crc kubenswrapper[4642]: I0128 06:50:09.098127 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:09 crc kubenswrapper[4642]: E0128 06:50:09.098218 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:09 crc kubenswrapper[4642]: E0128 06:50:09.098291 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:09 crc kubenswrapper[4642]: E0128 06:50:09.098405 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:10 crc kubenswrapper[4642]: I0128 06:50:10.097415 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:10 crc kubenswrapper[4642]: E0128 06:50:10.097564 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:11 crc kubenswrapper[4642]: I0128 06:50:11.097413 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:11 crc kubenswrapper[4642]: I0128 06:50:11.097531 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:11 crc kubenswrapper[4642]: E0128 06:50:11.098223 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 06:50:11 crc kubenswrapper[4642]: I0128 06:50:11.097558 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:11 crc kubenswrapper[4642]: E0128 06:50:11.098322 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 06:50:11 crc kubenswrapper[4642]: E0128 06:50:11.098353 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 06:50:12 crc kubenswrapper[4642]: I0128 06:50:12.098028 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:12 crc kubenswrapper[4642]: E0128 06:50:12.098156 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bpz6r" podUID="e7ad39da-99cf-4851-be79-a7d38df54055" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.097357 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.097404 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.097492 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.099170 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.099551 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.099622 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 06:50:13 crc kubenswrapper[4642]: I0128 06:50:13.100544 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 06:50:14 crc kubenswrapper[4642]: I0128 06:50:14.097539 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:14 crc kubenswrapper[4642]: I0128 06:50:14.099131 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 06:50:14 crc kubenswrapper[4642]: I0128 06:50:14.099655 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 06:50:17 crc kubenswrapper[4642]: I0128 06:50:17.179905 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.226520 4642 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.255705 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.256253 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.256355 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.256782 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.257051 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h6qvh"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.257564 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.257605 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ttt2d"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.257997 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.259361 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.262259 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.267045 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.267230 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.269384 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dj2m9"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.269905 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.270347 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277493 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277508 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277905 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277921 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277969 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.277995 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278081 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278096 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278153 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278243 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278630 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278720 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278741 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278832 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278945 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.278993 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279094 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279238 4642 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279261 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279321 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279410 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279434 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279412 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279618 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279670 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279784 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279866 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279907 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279941 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.279874 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.280011 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.280090 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.280635 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.281489 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.281851 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.283996 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.284393 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.284462 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.285008 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.286572 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k2t68"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.286889 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.287598 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.287854 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.290079 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xdv7f"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.290399 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.290696 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.290944 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.293394 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.293521 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.295362 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.295624 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.295661 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.295756 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296030 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296128 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296292 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296082 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296581 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296688 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.296856 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297057 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297271 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297415 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297621 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297638 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297927 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.297929 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 
06:50:20.297930 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.298136 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.298565 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.298667 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.298746 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299249 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299299 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299355 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299392 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299431 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299482 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299510 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299520 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299399 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299250 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299591 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299622 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299655 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299662 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 
06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299694 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299594 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299361 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299805 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299902 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.299976 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.300427 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.301048 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjn9r"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.301528 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.302861 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.303249 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.303497 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.304115 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.304137 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.304310 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.305119 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.309144 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.309874 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.309911 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.310977 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.312420 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.312806 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.312819 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.312996 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.313288 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.317798 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.317987 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggxxq"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.322709 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.325264 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.325853 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.326254 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.326276 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.328228 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.329974 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.330627 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.332048 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.337485 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.337844 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.338389 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.338507 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9dsht"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.338776 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.338936 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.339452 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.339510 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.339907 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.340311 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.340543 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.340631 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.340984 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.341158 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.345380 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.346650 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.347546 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.348440 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.349387 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.349683 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.350871 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b8v8n"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.351049 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.351878 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.356145 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.357469 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.358512 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.360307 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.361420 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.362063 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k2t68"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.363570 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.363932 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.364386 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.364484 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.366157 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.366241 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.365659 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.367675 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.367826 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.368293 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.370219 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.370742 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cc8qm"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.370851 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.371086 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.371768 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xdv7f"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.373604 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.374472 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ttt2d"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.375789 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.379828 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.380571 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.381917 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.382859 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.383485 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.386298 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.387273 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h6qvh"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.388606 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cj4zp"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.389020 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.389395 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dj2m9"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.390338 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjn9r"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.391141 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b8v8n"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.392028 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.392919 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.394031 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.397027 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.398000 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.398173 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.399054 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggxxq"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.400276 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.400680 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.401509 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.402414 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.403295 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.404420 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.406809 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.407833 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.408691 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.409742 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.410534 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.411346 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.412128 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.412940 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.413940 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cc8qm"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.420646 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.420970 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.421936 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2ncd"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.422913 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-v4lf2"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.423075 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.423431 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.423632 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-v4lf2"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.424594 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2ncd"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425852 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425881 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8sbb\" (UniqueName: \"kubernetes.io/projected/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-kube-api-access-r8sbb\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425904 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bp7\" (UniqueName: \"kubernetes.io/projected/33724c18-9aa8-4004-accc-fd8ff92cb999-kube-api-access-c9bp7\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425921 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425944 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425961 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33724c18-9aa8-4004-accc-fd8ff92cb999-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425976 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47fwv\" (UniqueName: \"kubernetes.io/projected/d1b83e60-dd95-437c-847c-f60f9c33ee1f-kube-api-access-47fwv\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.425992 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-images\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426006 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-config\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426028 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426042 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426056 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426072 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-serving-cert\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426086 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-config\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426104 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f917c23-bd51-44a0-b75a-7acf03f1d2de-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426142 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-client\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426158 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-encryption-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426174 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426224 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlt9\" (UniqueName: \"kubernetes.io/projected/39c42149-4d53-4e72-84f6-0c09d0f86ee2-kube-api-access-qvlt9\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426252 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghwc\" (UniqueName: \"kubernetes.io/projected/8f8a2fee-019f-4885-93ad-57787b4478d8-kube-api-access-cghwc\") pod \"downloads-7954f5f757-k2t68\" (UID: \"8f8a2fee-019f-4885-93ad-57787b4478d8\") " pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426281 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/536c56c9-e355-4b23-a079-9e34f4bc9123-metrics-tls\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426299 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426313 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-image-import-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426328 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426343 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426357 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cmm\" (UniqueName: \"kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426382 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426396 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426409 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426427 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdksz\" (UniqueName: \"kubernetes.io/projected/6680af4f-2095-41e6-9343-63ab88645dea-kube-api-access-jdksz\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426441 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426463 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lqnn\" (UniqueName: \"kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426477 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-serving-cert\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426504 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426520 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426535 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-serving-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426548 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b83e60-dd95-437c-847c-f60f9c33ee1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426564 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/536f8472-158f-45c2-a0f1-b6799b6bdbdd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426577 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-node-pullsecrets\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426590 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426604 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95pl\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-kube-api-access-j95pl\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426620 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426636 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/536c56c9-e355-4b23-a079-9e34f4bc9123-trusted-ca\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426651 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rcv\" (UniqueName: \"kubernetes.io/projected/ae263a90-0a71-40b8-bda1-ec21b3680994-kube-api-access-29rcv\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426665 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fe7270c-3b58-4011-9776-1360c24896ca-serving-cert\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426681 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426696 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 
06:50:20.426710 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-trusted-ca\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426729 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6680af4f-2095-41e6-9343-63ab88645dea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426743 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426759 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcstj\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-kube-api-access-kcstj\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426775 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426788 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426805 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39c42149-4d53-4e72-84f6-0c09d0f86ee2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426821 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-config\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 
06:50:20.426837 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kptw\" (UniqueName: \"kubernetes.io/projected/536f8472-158f-45c2-a0f1-b6799b6bdbdd-kube-api-access-2kptw\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426852 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33724c18-9aa8-4004-accc-fd8ff92cb999-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426867 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-audit\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426879 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b83e60-dd95-437c-847c-f60f9c33ee1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426896 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426923 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426946 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj69n\" (UniqueName: \"kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426960 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-audit-dir\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 
28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426976 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f917c23-bd51-44a0-b75a-7acf03f1d2de-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.426988 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427003 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427017 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-279xt\" (UniqueName: \"kubernetes.io/projected/9fe7270c-3b58-4011-9776-1360c24896ca-kube-api-access-279xt\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427035 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c42149-4d53-4e72-84f6-0c09d0f86ee2-serving-cert\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427050 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427064 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427083 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.427097 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-config\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.438332 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.458043 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.465294 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mmdr5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.466399 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.474114 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mmdr5"] Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.478039 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.497782 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.517619 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527493 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527538 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527562 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj69n\" (UniqueName: \"kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527579 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-audit-dir\") pod \"apiserver-76f77b778f-h6qvh\" (UID: 
\"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527605 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f917c23-bd51-44a0-b75a-7acf03f1d2de-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527624 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527639 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527656 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-279xt\" (UniqueName: \"kubernetes.io/projected/9fe7270c-3b58-4011-9776-1360c24896ca-kube-api-access-279xt\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527675 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c42149-4d53-4e72-84f6-0c09d0f86ee2-serving-cert\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527692 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527709 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527732 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 
28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-config\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527764 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527789 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8sbb\" (UniqueName: \"kubernetes.io/projected/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-kube-api-access-r8sbb\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527807 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bp7\" (UniqueName: \"kubernetes.io/projected/33724c18-9aa8-4004-accc-fd8ff92cb999-kube-api-access-c9bp7\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527824 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527840 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527855 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33724c18-9aa8-4004-accc-fd8ff92cb999-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527877 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47fwv\" (UniqueName: \"kubernetes.io/projected/d1b83e60-dd95-437c-847c-f60f9c33ee1f-kube-api-access-47fwv\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527895 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-images\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527910 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-config\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527932 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527948 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527967 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527982 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-serving-cert\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527997 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-config\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.528014 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f917c23-bd51-44a0-b75a-7acf03f1d2de-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.528030 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-client\") pod \"apiserver-76f77b778f-h6qvh\" (UID: 
\"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529139 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529156 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-encryption-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529179 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529212 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlt9\" (UniqueName: \"kubernetes.io/projected/39c42149-4d53-4e72-84f6-0c09d0f86ee2-kube-api-access-qvlt9\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529234 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghwc\" (UniqueName: \"kubernetes.io/projected/8f8a2fee-019f-4885-93ad-57787b4478d8-kube-api-access-cghwc\") pod \"downloads-7954f5f757-k2t68\" (UID: \"8f8a2fee-019f-4885-93ad-57787b4478d8\") " pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529252 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/536c56c9-e355-4b23-a079-9e34f4bc9123-metrics-tls\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529106 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529262 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.527987 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-audit-dir\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529278 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f917c23-bd51-44a0-b75a-7acf03f1d2de-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529590 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-images\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529709 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-image-import-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529730 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529751 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529769 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cmm\" (UniqueName: \"kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529787 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc 
kubenswrapper[4642]: I0128 06:50:20.529802 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529818 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529844 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdksz\" (UniqueName: \"kubernetes.io/projected/6680af4f-2095-41e6-9343-63ab88645dea-kube-api-access-jdksz\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529859 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529874 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lqnn\" (UniqueName: \"kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529883 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-config\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529890 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-serving-cert\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529940 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529963 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529981 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-serving-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.529998 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b83e60-dd95-437c-847c-f60f9c33ee1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530016 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/536f8472-158f-45c2-a0f1-b6799b6bdbdd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530033 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-node-pullsecrets\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530050 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530067 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95pl\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-kube-api-access-j95pl\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530084 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530101 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/536c56c9-e355-4b23-a079-9e34f4bc9123-trusted-ca\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: 
\"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530117 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rcv\" (UniqueName: \"kubernetes.io/projected/ae263a90-0a71-40b8-bda1-ec21b3680994-kube-api-access-29rcv\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530132 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fe7270c-3b58-4011-9776-1360c24896ca-serving-cert\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530150 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530167 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530208 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-trusted-ca\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530228 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6680af4f-2095-41e6-9343-63ab88645dea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530243 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530259 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcstj\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-kube-api-access-kcstj\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc 
kubenswrapper[4642]: I0128 06:50:20.530259 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33724c18-9aa8-4004-accc-fd8ff92cb999-config\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530273 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530317 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530337 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39c42149-4d53-4e72-84f6-0c09d0f86ee2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530355 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-config\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530371 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kptw\" (UniqueName: \"kubernetes.io/projected/536f8472-158f-45c2-a0f1-b6799b6bdbdd-kube-api-access-2kptw\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530387 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33724c18-9aa8-4004-accc-fd8ff92cb999-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530403 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-audit\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530419 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d1b83e60-dd95-437c-847c-f60f9c33ee1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530738 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.530842 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.531585 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-image-import-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.531870 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.532566 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-config\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.532676 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.533248 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.533590 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 
06:50:20.533639 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-serving-ca\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.533967 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536f8472-158f-45c2-a0f1-b6799b6bdbdd-config\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.533259 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/39c42149-4d53-4e72-84f6-0c09d0f86ee2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.534108 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-trusted-ca\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.534167 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae263a90-0a71-40b8-bda1-ec21b3680994-node-pullsecrets\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.534702 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.534879 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.534906 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535142 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae263a90-0a71-40b8-bda1-ec21b3680994-audit\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 
crc kubenswrapper[4642]: I0128 06:50:20.535199 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-encryption-config\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535203 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-etcd-client\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535413 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b83e60-dd95-437c-847c-f60f9c33ee1f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535604 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535765 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.535881 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe7270c-3b58-4011-9776-1360c24896ca-service-ca-bundle\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.536221 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f917c23-bd51-44a0-b75a-7acf03f1d2de-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.536650 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.537753 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.537996 4642 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.538017 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.538518 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.538969 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae263a90-0a71-40b8-bda1-ec21b3680994-serving-cert\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539141 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33724c18-9aa8-4004-accc-fd8ff92cb999-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539213 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539309 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6680af4f-2095-41e6-9343-63ab88645dea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539397 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539409 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/39c42149-4d53-4e72-84f6-0c09d0f86ee2-serving-cert\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539670 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b83e60-dd95-437c-847c-f60f9c33ee1f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.539986 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.540012 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/536f8472-158f-45c2-a0f1-b6799b6bdbdd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.540298 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.540488 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-serving-cert\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.541013 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.541004 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.541124 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fe7270c-3b58-4011-9776-1360c24896ca-serving-cert\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: 
\"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.541363 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.559810 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.579054 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.598639 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.602573 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.617713 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.619716 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-config\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.638362 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.658058 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.678067 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.698937 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.737785 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.758374 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.778767 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.797597 4642 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.805020 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/536c56c9-e355-4b23-a079-9e34f4bc9123-metrics-tls\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.818079 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.837578 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.857889 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.882916 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.885876 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/536c56c9-e355-4b23-a079-9e34f4bc9123-trusted-ca\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.917774 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.938393 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.958108 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.979263 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 06:50:20 crc kubenswrapper[4642]: I0128 06:50:20.997706 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.017666 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.037785 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.058552 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.077946 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.098139 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 06:50:21 crc 
kubenswrapper[4642]: I0128 06:50:21.118018 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.141962 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.158280 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.177638 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.198109 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.217739 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.238496 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.257915 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.278492 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.297858 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.317933 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.338277 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.356538 4642 request.go:700] Waited for 1.007805999s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.357851 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.378430 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.403179 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.418340 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.438169 4642 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.458645 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.478254 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.497921 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.518625 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.537882 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.557956 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.577969 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.598454 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.618099 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.638252 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.658475 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.677804 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.698491 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.718345 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.737619 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.757785 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.778127 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 
06:50:21.797771 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.817818 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.838539 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.858027 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.877821 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.898482 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.917936 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.938414 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.957887 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.978880 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 06:50:21 crc kubenswrapper[4642]: I0128 06:50:21.998598 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.017812 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.037959 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.058438 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.078052 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.097711 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.118363 4642 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.138467 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.157926 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 06:50:22 crc 
kubenswrapper[4642]: I0128 06:50:22.178064 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.198570 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.219277 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.238141 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.258136 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.278359 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.297507 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.331078 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj69n\" (UniqueName: \"kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n\") pod \"route-controller-manager-6576b87f9c-5m9j5\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.349018 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-279xt\" (UniqueName: \"kubernetes.io/projected/9fe7270c-3b58-4011-9776-1360c24896ca-kube-api-access-279xt\") pod \"authentication-operator-69f744f599-dj2m9\" (UID: \"9fe7270c-3b58-4011-9776-1360c24896ca\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.356678 4642 request.go:700] Waited for 1.827405438s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.369727 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.389302 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlt9\" (UniqueName: \"kubernetes.io/projected/39c42149-4d53-4e72-84f6-0c09d0f86ee2-kube-api-access-qvlt9\") pod \"openshift-config-operator-7777fb866f-tqfl8\" (UID: \"39c42149-4d53-4e72-84f6-0c09d0f86ee2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.409073 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47fwv\" (UniqueName: \"kubernetes.io/projected/d1b83e60-dd95-437c-847c-f60f9c33ee1f-kube-api-access-47fwv\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-8xmnf\" (UID: \"d1b83e60-dd95-437c-847c-f60f9c33ee1f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.429789 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghwc\" (UniqueName: \"kubernetes.io/projected/8f8a2fee-019f-4885-93ad-57787b4478d8-kube-api-access-cghwc\") pod \"downloads-7954f5f757-k2t68\" (UID: \"8f8a2fee-019f-4885-93ad-57787b4478d8\") " pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.438030 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.446286 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.449171 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95pl\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-kube-api-access-j95pl\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.470544 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a18026d8-4feb-4dd9-b0ab-c24e4aa102bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b626n\" (UID: \"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.487885 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.492159 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lqnn\" (UniqueName: \"kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn\") pod \"oauth-openshift-558db77b4-s9zdh\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.502508 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.513761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdksz\" (UniqueName: \"kubernetes.io/projected/6680af4f-2095-41e6-9343-63ab88645dea-kube-api-access-jdksz\") pod \"cluster-samples-operator-665b6dd947-hwwqg\" (UID: \"6680af4f-2095-41e6-9343-63ab88645dea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.514727 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.532175 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kptw\" (UniqueName: \"kubernetes.io/projected/536f8472-158f-45c2-a0f1-b6799b6bdbdd-kube-api-access-2kptw\") pod \"machine-api-operator-5694c8668f-ttt2d\" (UID: \"536f8472-158f-45c2-a0f1-b6799b6bdbdd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.542982 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.556320 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcstj\" (UniqueName: \"kubernetes.io/projected/536c56c9-e355-4b23-a079-9e34f4bc9123-kube-api-access-kcstj\") pod \"ingress-operator-5b745b69d9-jtkc5\" (UID: \"536c56c9-e355-4b23-a079-9e34f4bc9123\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.561247 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.577374 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rcv\" (UniqueName: \"kubernetes.io/projected/ae263a90-0a71-40b8-bda1-ec21b3680994-kube-api-access-29rcv\") pod \"apiserver-76f77b778f-h6qvh\" (UID: \"ae263a90-0a71-40b8-bda1-ec21b3680994\") " pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.585374 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.590874 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cmm\" (UniqueName: \"kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm\") pod \"console-f9d7485db-zrrr7\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:22 crc kubenswrapper[4642]: W0128 06:50:22.607427 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c42149_4d53_4e72_84f6_0c09d0f86ee2.slice/crio-f2aee6ee5f48098a01e158b252cf1f454a4dc7d5880bcb0dcedf4671bb2d7999 WatchSource:0}: Error finding container f2aee6ee5f48098a01e158b252cf1f454a4dc7d5880bcb0dcedf4671bb2d7999: Status 404 returned error can't find the container with id f2aee6ee5f48098a01e158b252cf1f454a4dc7d5880bcb0dcedf4671bb2d7999 Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.611458 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f917c23-bd51-44a0-b75a-7acf03f1d2de-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8q4rl\" (UID: \"3f917c23-bd51-44a0-b75a-7acf03f1d2de\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.613126 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dj2m9"] Jan 28 
06:50:22 crc kubenswrapper[4642]: W0128 06:50:22.622351 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe7270c_3b58_4011_9776_1360c24896ca.slice/crio-4ac5886bebc57c8c193447b73865a200bff1f310f76d8127073e8161e1c842db WatchSource:0}: Error finding container 4ac5886bebc57c8c193447b73865a200bff1f310f76d8127073e8161e1c842db: Status 404 returned error can't find the container with id 4ac5886bebc57c8c193447b73865a200bff1f310f76d8127073e8161e1c842db Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.633205 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bp7\" (UniqueName: \"kubernetes.io/projected/33724c18-9aa8-4004-accc-fd8ff92cb999-kube-api-access-c9bp7\") pod \"openshift-apiserver-operator-796bbdcf4f-t78bf\" (UID: \"33724c18-9aa8-4004-accc-fd8ff92cb999\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.656801 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8sbb\" (UniqueName: \"kubernetes.io/projected/5cd3e28d-c288-4b1d-81c2-dff387bb3de7-kube-api-access-r8sbb\") pod \"console-operator-58897d9998-xdv7f\" (UID: \"5cd3e28d-c288-4b1d-81c2-dff387bb3de7\") " pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.682779 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.686824 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" event={"ID":"39c42149-4d53-4e72-84f6-0c09d0f86ee2","Type":"ContainerStarted","Data":"f2aee6ee5f48098a01e158b252cf1f454a4dc7d5880bcb0dcedf4671bb2d7999"} Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.687547 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" event={"ID":"9fe7270c-3b58-4011-9776-1360c24896ca","Type":"ContainerStarted","Data":"4ac5886bebc57c8c193447b73865a200bff1f310f76d8127073e8161e1c842db"} Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.710397 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.723735 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.727362 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n"] Jan 28 06:50:22 crc kubenswrapper[4642]: W0128 06:50:22.736443 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda18026d8_4feb_4dd9_b0ab_c24e4aa102bf.slice/crio-8b88f4d8ab54c6876227014756d2941b469be7b5c3f149d44c1fbe3af41b3497 WatchSource:0}: Error finding container 8b88f4d8ab54c6876227014756d2941b469be7b5c3f149d44c1fbe3af41b3497: Status 404 returned error can't find the container with id 8b88f4d8ab54c6876227014756d2941b469be7b5c3f149d44c1fbe3af41b3497 Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751604 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-service-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751631 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lq5d\" (UniqueName: \"kubernetes.io/projected/51a36588-71bc-409b-b36b-2f76917e5c40-kube-api-access-4lq5d\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751649 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-config\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751669 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10725107-b912-41e8-b667-5d42e2fa5cd8-metrics-tls\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751690 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751705 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751719 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-psvvh\" (UniqueName: \"kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751732 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-client\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751745 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1575b37-adf2-48d0-98ac-2ed31860ebeb-machine-approver-tls\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751762 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751777 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751790 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751820 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751835 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tnb6\" (UniqueName: \"kubernetes.io/projected/c1575b37-adf2-48d0-98ac-2ed31860ebeb-kube-api-access-9tnb6\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751850 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/51a36588-71bc-409b-b36b-2f76917e5c40-audit-dir\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751873 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pr4f\" (UniqueName: \"kubernetes.io/projected/522a2fa1-337b-4bb2-ac79-06d592d59bbe-kube-api-access-2pr4f\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751886 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-audit-policies\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751899 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-serving-cert\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751914 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751931 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-encryption-config\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751948 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-serving-cert\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751964 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4k75\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.751986 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752007 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-auth-proxy-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752024 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwt4z\" (UniqueName: \"kubernetes.io/projected/10725107-b912-41e8-b667-5d42e2fa5cd8-kube-api-access-qwt4z\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752039 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752055 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752068 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752084 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752105 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-client\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752120 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: 
I0128 06:50:22.752134 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.752149 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: E0128 06:50:22.752514 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.252503943 +0000 UTC m=+146.484592752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.756399 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.763261 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.783757 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.795637 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.808922 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.815852 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf"] Jan 28 06:50:22 crc kubenswrapper[4642]: W0128 06:50:22.821422 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod536c56c9_e355_4b23_a079_9e34f4bc9123.slice/crio-6298e061816ad06258f4a89f7753a981eccf4882d0d8a407e4f1cd60e369225b WatchSource:0}: Error finding container 6298e061816ad06258f4a89f7753a981eccf4882d0d8a407e4f1cd60e369225b: Status 404 returned error can't find the container with id 6298e061816ad06258f4a89f7753a981eccf4882d0d8a407e4f1cd60e369225b Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.829474 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.852772 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.852977 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e696e7-07de-444b-8c5c-1bc69534b3b8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853016 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853033 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrds4\" (UniqueName: \"kubernetes.io/projected/28e7930c-b0c7-4ef7-975d-fe130a30089c-kube-api-access-mrds4\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853048 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/78e696e7-07de-444b-8c5c-1bc69534b3b8-kube-api-access-797zm\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853097 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e3520a4-607d-48a0-ad47-32d5fe2efc54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853113 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsd7\" (UniqueName: \"kubernetes.io/projected/ab318e2b-3fdb-4d5d-87df-5242468f1932-kube-api-access-zhsd7\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853130 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853153 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853171 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-client\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853225 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-srv-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853260 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a63bb09-02c3-415a-b7b5-577d3136029a-serving-cert\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853275 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-cabundle\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853290 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-node-bootstrap-token\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853322 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853337 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv4c\" (UniqueName: \"kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c\") pod 
\"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853363 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853377 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853418 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853433 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d07c074-7835-41f1-b39a-b159f851e000-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853490 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10725107-b912-41e8-b667-5d42e2fa5cd8-metrics-tls\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853507 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72wn\" (UniqueName: \"kubernetes.io/projected/1711b0d7-ac23-40bd-b522-46a7148d7a6f-kube-api-access-c72wn\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853540 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1575b37-adf2-48d0-98ac-2ed31860ebeb-machine-approver-tls\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853555 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvvh\" (UniqueName: \"kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853569 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-plugins-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853582 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-csi-data-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853597 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853629 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853660 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3520a4-607d-48a0-ad47-32d5fe2efc54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853702 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qqp5\" (UniqueName: \"kubernetes.io/projected/9a63bb09-02c3-415a-b7b5-577d3136029a-kube-api-access-2qqp5\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853717 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-audit-policies\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853732 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-serving-cert\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc 
kubenswrapper[4642]: I0128 06:50:22.853750 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8c4\" (UniqueName: \"kubernetes.io/projected/4e5b3697-2b63-4067-9a1c-6011c513a643-kube-api-access-nm8c4\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853764 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-default-certificate\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853778 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e696e7-07de-444b-8c5c-1bc69534b3b8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853801 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853817 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853843 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b7c9f44-73e2-4109-827b-bed3a722a78c-service-ca-bundle\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853858 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-metrics-certs\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853873 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqd2\" (UniqueName: \"kubernetes.io/projected/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-kube-api-access-wvqd2\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc 
kubenswrapper[4642]: I0128 06:50:22.853888 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/28e7930c-b0c7-4ef7-975d-fe130a30089c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853919 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4k75\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853932 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853948 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-webhook-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853972 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz27x\" (UniqueName: \"kubernetes.io/projected/6364f994-c14f-4cb9-ab55-ad13ad0973f1-kube-api-access-cz27x\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.853989 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854002 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-key\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854015 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-srv-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854044 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-auth-proxy-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854058 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/399e3aa1-f241-4695-ba69-c8e2900b7bb2-proxy-tls\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854083 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/399e3aa1-f241-4695-ba69-c8e2900b7bb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.854106 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e24a32c1-9c23-41a2-b3a2-58ba818a0534-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:22 crc kubenswrapper[4642]: E0128 06:50:22.856544 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.356518108 +0000 UTC m=+146.588606917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.860683 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.861096 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.861369 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.861724 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.863731 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c1575b37-adf2-48d0-98ac-2ed31860ebeb-machine-approver-tls\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.864133 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k2t68"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.864918 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-serving-cert\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.866020 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.866392 4642 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c1575b37-adf2-48d0-98ac-2ed31860ebeb-auth-proxy-config\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.867459 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwt4z\" (UniqueName: \"kubernetes.io/projected/10725107-b912-41e8-b667-5d42e2fa5cd8-kube-api-access-qwt4z\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.867551 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgpn\" (UniqueName: \"kubernetes.io/projected/e24a32c1-9c23-41a2-b3a2-58ba818a0534-kube-api-access-9zgpn\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868039 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-audit-policies\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868232 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6364f994-c14f-4cb9-ab55-ad13ad0973f1-proxy-tls\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868301 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-registration-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868332 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84eeb48-6e52-4989-b622-f1031dad991a-metrics-tls\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868388 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868411 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gqj\" (UniqueName: \"kubernetes.io/projected/a84eeb48-6e52-4989-b622-f1031dad991a-kube-api-access-r7gqj\") pod \"dns-default-mmdr5\" (UID: 
\"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868475 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868500 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfxt\" (UniqueName: \"kubernetes.io/projected/bfaa1a5c-0e92-4979-af54-075acbc06ea7-kube-api-access-5zfxt\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868521 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868535 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a84eeb48-6e52-4989-b622-f1031dad991a-config-volume\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868586 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868636 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjsl\" (UniqueName: \"kubernetes.io/projected/96101a91-8390-4743-821e-7bc7fe37f8e4-kube-api-access-tbjsl\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868669 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjb82\" (UniqueName: \"kubernetes.io/projected/86ebb583-2f1b-4a28-b1b8-81123c2484d7-kube-api-access-zjb82\") pod \"migrator-59844c95c7-h76tc\" (UID: \"86ebb583-2f1b-4a28-b1b8-81123c2484d7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868691 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3520a4-607d-48a0-ad47-32d5fe2efc54-config\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc 
kubenswrapper[4642]: I0128 06:50:22.868742 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d07c074-7835-41f1-b39a-b159f851e000-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868764 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-service-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868786 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lq5d\" (UniqueName: \"kubernetes.io/projected/51a36588-71bc-409b-b36b-2f76917e5c40-kube-api-access-4lq5d\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868849 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-config\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868875 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2x6c\" (UniqueName: \"kubernetes.io/projected/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-kube-api-access-d2x6c\") pod \"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868894 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a63bb09-02c3-415a-b7b5-577d3136029a-config\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868913 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d07c074-7835-41f1-b39a-b159f851e000-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868952 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869570 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47t7v\" (UniqueName: \"kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869642 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869656 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10725107-b912-41e8-b667-5d42e2fa5cd8-metrics-tls\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869669 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-client\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869688 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-socket-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.868534 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-client\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869754 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869780 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869810 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96101a91-8390-4743-821e-7bc7fe37f8e4-cert\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 
06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869831 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869853 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-stats-auth\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869888 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869927 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.869972 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tnb6\" (UniqueName: \"kubernetes.io/projected/c1575b37-adf2-48d0-98ac-2ed31860ebeb-kube-api-access-9tnb6\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870008 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvht\" (UniqueName: \"kubernetes.io/projected/7b7c9f44-73e2-4109-827b-bed3a722a78c-kube-api-access-9hvht\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870047 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70c953e5-75c5-4a06-aacd-913ddfbea45f-tmpfs\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870079 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a36588-71bc-409b-b36b-2f76917e5c40-audit-dir\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870137 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-mountpoint-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870159 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pr4f\" (UniqueName: \"kubernetes.io/projected/522a2fa1-337b-4bb2-ac79-06d592d59bbe-kube-api-access-2pr4f\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870177 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-certs\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870230 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-images\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870250 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870281 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctqm\" (UniqueName: \"kubernetes.io/projected/70c953e5-75c5-4a06-aacd-913ddfbea45f-kube-api-access-bctqm\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870301 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-encryption-config\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870321 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870341 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skcq\" (UniqueName: 
\"kubernetes.io/projected/399e3aa1-f241-4695-ba69-c8e2900b7bb2-kube-api-access-8skcq\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.870385 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-serving-cert\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.871848 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-service-ca\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.872615 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522a2fa1-337b-4bb2-ac79-06d592d59bbe-config\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.873410 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.873832 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.880693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.896000 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.897180 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a36588-71bc-409b-b36b-2f76917e5c40-audit-dir\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: E0128 06:50:22.897788 
4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.397748211 +0000 UTC m=+146.629837040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.903293 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h6qvh"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.903369 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a36588-71bc-409b-b36b-2f76917e5c40-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.903709 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.904702 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-serving-cert\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.906458 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.908758 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/522a2fa1-337b-4bb2-ac79-06d592d59bbe-etcd-client\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.909277 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.912632 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.912642 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a36588-71bc-409b-b36b-2f76917e5c40-encryption-config\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.925305 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvvh\" (UniqueName: \"kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh\") pod \"controller-manager-879f6c89f-t9tps\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.925706 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.925739 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.927137 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ttt2d"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.928065 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.929267 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4k75\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.941702 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwt4z\" (UniqueName: \"kubernetes.io/projected/10725107-b912-41e8-b667-5d42e2fa5cd8-kube-api-access-qwt4z\") pod \"dns-operator-744455d44c-ggxxq\" (UID: \"10725107-b912-41e8-b667-5d42e2fa5cd8\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.952933 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:50:22 crc kubenswrapper[4642]: W0128 06:50:22.959583 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod536f8472_158f_45c2_a0f1_b6799b6bdbdd.slice/crio-15bd9b51bc94ad22d88da6aff1e6bdd4bce66caf9dc526f2e257753377b482d2 WatchSource:0}: Error finding container 15bd9b51bc94ad22d88da6aff1e6bdd4bce66caf9dc526f2e257753377b482d2: Status 404 returned error can't find the container with id 
15bd9b51bc94ad22d88da6aff1e6bdd4bce66caf9dc526f2e257753377b482d2 Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.971706 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:22 crc kubenswrapper[4642]: E0128 06:50:22.971867 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.471849673 +0000 UTC m=+146.703938482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.971939 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-csi-data-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.971971 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.971995 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3520a4-607d-48a0-ad47-32d5fe2efc54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.972011 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qqp5\" (UniqueName: \"kubernetes.io/projected/9a63bb09-02c3-415a-b7b5-577d3136029a-kube-api-access-2qqp5\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.972033 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm8c4\" (UniqueName: \"kubernetes.io/projected/4e5b3697-2b63-4067-9a1c-6011c513a643-kube-api-access-nm8c4\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.972141 4642 reconciler_common.go:218] 
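[editor's note] The pairs of "operationExecutor.MountVolume started" (reconciler_common.go:218) and "MountVolume.SetUp succeeded" (operation_generator.go:637) entries that dominate this section come from the kubelet's volume manager reconciling its desired state of world (volumes the scheduled pods need) against its actual state of world (volumes already mounted). The loop below is a deliberately simplified sketch of that pattern; the types and the mount function are stand-ins, not kubelet's real API.

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile mounts every desired volume that is not yet in the actual
    // state, mimicking the started/succeeded log pairs seen above.
    func reconcile(desired []volume, actual map[string]bool) {
    	for _, v := range desired {
    		if actual[v.name] {
    			continue // already mounted, nothing to do
    		}
    		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
    		if err := mount(v); err != nil {
    			// The real kubelet records the failure and retries with
    			// backoff (see the nestedpendingoperations errors in this log).
    			fmt.Printf("MountVolume failed: %v\n", err)
    			continue
    		}
    		actual[v.name] = true
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
    	}
    }

    func mount(v volume) error { return nil } // placeholder for the real mount

    func main() {
    	reconcile([]volume{{"config-volume", "dns-default-mmdr5"}}, map[string]bool{})
    }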
"operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-default-certificate\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.972503 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-csi-data-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.973560 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974304 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e696e7-07de-444b-8c5c-1bc69534b3b8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974342 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974370 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b7c9f44-73e2-4109-827b-bed3a722a78c-service-ca-bundle\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974404 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-metrics-certs\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974441 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvqd2\" (UniqueName: \"kubernetes.io/projected/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-kube-api-access-wvqd2\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974560 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/28e7930c-b0c7-4ef7-975d-fe130a30089c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974614 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.974634 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz27x\" (UniqueName: \"kubernetes.io/projected/6364f994-c14f-4cb9-ab55-ad13ad0973f1-kube-api-access-cz27x\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.975903 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.975930 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-webhook-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.975949 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-key\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.975968 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-srv-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976011 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/399e3aa1-f241-4695-ba69-c8e2900b7bb2-proxy-tls\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976028 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/399e3aa1-f241-4695-ba69-c8e2900b7bb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: 
\"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976046 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e24a32c1-9c23-41a2-b3a2-58ba818a0534-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976065 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgpn\" (UniqueName: \"kubernetes.io/projected/e24a32c1-9c23-41a2-b3a2-58ba818a0534-kube-api-access-9zgpn\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976093 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6364f994-c14f-4cb9-ab55-ad13ad0973f1-proxy-tls\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976115 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-registration-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976130 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84eeb48-6e52-4989-b622-f1031dad991a-metrics-tls\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976160 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gqj\" (UniqueName: \"kubernetes.io/projected/a84eeb48-6e52-4989-b622-f1031dad991a-kube-api-access-r7gqj\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976181 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zfxt\" (UniqueName: \"kubernetes.io/projected/bfaa1a5c-0e92-4979-af54-075acbc06ea7-kube-api-access-5zfxt\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976208 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976224 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a84eeb48-6e52-4989-b622-f1031dad991a-config-volume\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976243 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976261 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjsl\" (UniqueName: \"kubernetes.io/projected/96101a91-8390-4743-821e-7bc7fe37f8e4-kube-api-access-tbjsl\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976280 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjb82\" (UniqueName: \"kubernetes.io/projected/86ebb583-2f1b-4a28-b1b8-81123c2484d7-kube-api-access-zjb82\") pod \"migrator-59844c95c7-h76tc\" (UID: \"86ebb583-2f1b-4a28-b1b8-81123c2484d7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976295 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3520a4-607d-48a0-ad47-32d5fe2efc54-config\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976322 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d07c074-7835-41f1-b39a-b159f851e000-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976347 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a63bb09-02c3-415a-b7b5-577d3136029a-config\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976361 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d07c074-7835-41f1-b39a-b159f851e000-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976376 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2x6c\" (UniqueName: \"kubernetes.io/projected/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-kube-api-access-d2x6c\") pod 
\"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976395 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47t7v\" (UniqueName: \"kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976413 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-socket-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976429 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96101a91-8390-4743-821e-7bc7fe37f8e4-cert\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976443 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-stats-auth\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976477 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976498 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hvht\" (UniqueName: \"kubernetes.io/projected/7b7c9f44-73e2-4109-827b-bed3a722a78c-kube-api-access-9hvht\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976511 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70c953e5-75c5-4a06-aacd-913ddfbea45f-tmpfs\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976528 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-mountpoint-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976541 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-certs\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976562 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-images\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976577 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976594 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctqm\" (UniqueName: \"kubernetes.io/projected/70c953e5-75c5-4a06-aacd-913ddfbea45f-kube-api-access-bctqm\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976613 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976621 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-default-certificate\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976627 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skcq\" (UniqueName: \"kubernetes.io/projected/399e3aa1-f241-4695-ba69-c8e2900b7bb2-kube-api-access-8skcq\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976740 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e696e7-07de-444b-8c5c-1bc69534b3b8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976762 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume\") pod 
\"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976779 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrds4\" (UniqueName: \"kubernetes.io/projected/28e7930c-b0c7-4ef7-975d-fe130a30089c-kube-api-access-mrds4\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976794 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/78e696e7-07de-444b-8c5c-1bc69534b3b8-kube-api-access-797zm\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976811 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e3520a4-607d-48a0-ad47-32d5fe2efc54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976824 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhsd7\" (UniqueName: \"kubernetes.io/projected/ab318e2b-3fdb-4d5d-87df-5242468f1932-kube-api-access-zhsd7\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976839 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976857 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-srv-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976875 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a63bb09-02c3-415a-b7b5-577d3136029a-serving-cert\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976889 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-cabundle\") pod 
\"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976902 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-node-bootstrap-token\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976918 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbv4c\" (UniqueName: \"kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976945 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976961 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d07c074-7835-41f1-b39a-b159f851e000-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.976981 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72wn\" (UniqueName: \"kubernetes.io/projected/1711b0d7-ac23-40bd-b522-46a7148d7a6f-kube-api-access-c72wn\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.977002 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-plugins-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.977230 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-plugins-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.977754 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-socket-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.977878 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b7c9f44-73e2-4109-827b-bed3a722a78c-service-ca-bundle\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.977939 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.978328 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-metrics-certs\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.979069 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.979380 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-registration-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.979611 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-cabundle\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.980078 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e696e7-07de-444b-8c5c-1bc69534b3b8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.980246 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a63bb09-02c3-415a-b7b5-577d3136029a-config\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.980591 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3520a4-607d-48a0-ad47-32d5fe2efc54-config\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.980624 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.981025 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6364f994-c14f-4cb9-ab55-ad13ad0973f1-images\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.981624 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e696e7-07de-444b-8c5c-1bc69534b3b8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.984500 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-profile-collector-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:22 crc kubenswrapper[4642]: E0128 06:50:22.984584 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.48457348 +0000 UTC m=+146.716662289 (durationBeforeRetry 500ms). 
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.985080 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70c953e5-75c5-4a06-aacd-913ddfbea45f-tmpfs\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.985948 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d07c074-7835-41f1-b39a-b159f851e000-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.985986 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1711b0d7-ac23-40bd-b522-46a7148d7a6f-mountpoint-dir\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.986552 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/399e3aa1-f241-4695-ba69-c8e2900b7bb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.987942 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d07c074-7835-41f1-b39a-b159f851e000-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.991292 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6364f994-c14f-4cb9-ab55-ad13ad0973f1-proxy-tls\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.993667 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a84eeb48-6e52-4989-b622-f1031dad991a-metrics-tls\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.993955 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96101a91-8390-4743-821e-7bc7fe37f8e4-cert\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.994563 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e3520a4-607d-48a0-ad47-32d5fe2efc54-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.994770 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a63bb09-02c3-415a-b7b5-577d3136029a-serving-cert\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.995048 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a84eeb48-6e52-4989-b622-f1031dad991a-config-volume\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " pod="openshift-dns/dns-default-mmdr5"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.996416 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-node-bootstrap-token\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.998468 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/399e3aa1-f241-4695-ba69-c8e2900b7bb2-proxy-tls\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq"
Jan 28 06:50:22 crc kubenswrapper[4642]: I0128 06:50:22.999101 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-webhook-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.003548 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.004530 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7b7c9f44-73e2-4109-827b-bed3a722a78c-stats-auth\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.006025 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/28e7930c-b0c7-4ef7-975d-fe130a30089c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.006105 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.007308 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.008138 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.008569 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.008986 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70c953e5-75c5-4a06-aacd-913ddfbea45f-apiservice-cert\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.009639 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4e5b3697-2b63-4067-9a1c-6011c513a643-certs\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.010306 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e24a32c1-9c23-41a2-b3a2-58ba818a0534-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.010603 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tnb6\" (UniqueName: \"kubernetes.io/projected/c1575b37-adf2-48d0-98ac-2ed31860ebeb-kube-api-access-9tnb6\") pod \"machine-approver-56656f9798-p5lc2\" (UID: \"c1575b37-adf2-48d0-98ac-2ed31860ebeb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.012556 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-srv-cert\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.012593 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.012564 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab318e2b-3fdb-4d5d-87df-5242468f1932-signing-key\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.013111 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfaa1a5c-0e92-4979-af54-075acbc06ea7-srv-cert\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.013716 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.016593 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lq5d\" (UniqueName: \"kubernetes.io/projected/51a36588-71bc-409b-b36b-2f76917e5c40-kube-api-access-4lq5d\") pod \"apiserver-7bbb656c7d-jwpbc\" (UID: \"51a36588-71bc-409b-b36b-2f76917e5c40\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.020090 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.022553 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.036905 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pr4f\" (UniqueName: \"kubernetes.io/projected/522a2fa1-337b-4bb2-ac79-06d592d59bbe-kube-api-access-2pr4f\") pod \"etcd-operator-b45778765-kjn9r\" (UID: \"522a2fa1-337b-4bb2-ac79-06d592d59bbe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r"
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.044052 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg"]
Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.068253 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps"
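[editor's note] The "No sandbox for pod can be found. Need to start a new one" entries mean the kubelet has no existing pod sandbox for these freshly scheduled pods and will ask the CRI runtime (CRI-O on this node, per the crio- cgroup path earlier in the log) to create one before any containers start. The sketch below shows a bare-bones RunPodSandbox call using the k8s.io/cri-api types; the socket path and pod metadata are illustrative, and a real caller would populate much more of the sandbox config.

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Illustrative CRI-O socket path.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// Metadata mirrors one of the pods above; Attempt counts sandbox restarts.
    	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "network-check-target-xd92c",
    				Namespace: "openshift-network-diagnostics",
    				Uid:       "3b6479f0-333b-4a96-9adf-2099afdc2447",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId)
    }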
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.074752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qqp5\" (UniqueName: \"kubernetes.io/projected/9a63bb09-02c3-415a-b7b5-577d3136029a-kube-api-access-2qqp5\") pod \"service-ca-operator-777779d784-nl4rt\" (UID: \"9a63bb09-02c3-415a-b7b5-577d3136029a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.075025 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.078104 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.078567 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.578553969 +0000 UTC m=+146.810642778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.086407 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl"] Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.094355 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm8c4\" (UniqueName: \"kubernetes.io/projected/4e5b3697-2b63-4067-9a1c-6011c513a643-kube-api-access-nm8c4\") pod \"machine-config-server-cj4zp\" (UID: \"4e5b3697-2b63-4067-9a1c-6011c513a643\") " pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.112689 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvqd2\" (UniqueName: \"kubernetes.io/projected/2f0d37e4-6b1f-4211-b41d-77838d34c9bd-kube-api-access-wvqd2\") pod \"catalog-operator-68c6474976-t7v5l\" (UID: \"2f0d37e4-6b1f-4211-b41d-77838d34c9bd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.134199 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xdv7f"] Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.136495 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gqj\" (UniqueName: \"kubernetes.io/projected/a84eeb48-6e52-4989-b622-f1031dad991a-kube-api-access-r7gqj\") pod \"dns-default-mmdr5\" (UID: \"a84eeb48-6e52-4989-b622-f1031dad991a\") " 
pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.136974 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.152871 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.162024 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz27x\" (UniqueName: \"kubernetes.io/projected/6364f994-c14f-4cb9-ab55-ad13ad0973f1-kube-api-access-cz27x\") pod \"machine-config-operator-74547568cd-glqwl\" (UID: \"6364f994-c14f-4cb9-ab55-ad13ad0973f1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.184235 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.184553 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.684539312 +0000 UTC m=+146.916628121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.191402 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjb82\" (UniqueName: \"kubernetes.io/projected/86ebb583-2f1b-4a28-b1b8-81123c2484d7-kube-api-access-zjb82\") pod \"migrator-59844c95c7-h76tc\" (UID: \"86ebb583-2f1b-4a28-b1b8-81123c2484d7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.192616 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.208533 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zfxt\" (UniqueName: \"kubernetes.io/projected/bfaa1a5c-0e92-4979-af54-075acbc06ea7-kube-api-access-5zfxt\") pod \"olm-operator-6b444d44fb-4jjwv\" (UID: \"bfaa1a5c-0e92-4979-af54-075acbc06ea7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.217279 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.220296 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrds4\" (UniqueName: \"kubernetes.io/projected/28e7930c-b0c7-4ef7-975d-fe130a30089c-kube-api-access-mrds4\") pod \"control-plane-machine-set-operator-78cbb6b69f-8w4vg\" (UID: \"28e7930c-b0c7-4ef7-975d-fe130a30089c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.223739 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.231902 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/78e696e7-07de-444b-8c5c-1bc69534b3b8-kube-api-access-797zm\") pod \"kube-storage-version-migrator-operator-b67b599dd-qzf9d\" (UID: \"78e696e7-07de-444b-8c5c-1bc69534b3b8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.235551 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.255365 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d07c074-7835-41f1-b39a-b159f851e000-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9fbj\" (UID: \"3d07c074-7835-41f1-b39a-b159f851e000\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.261272 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.271401 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.279275 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.285316 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.285717 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.785703256 +0000 UTC m=+147.017792064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.288732 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbv4c\" (UniqueName: \"kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c\") pod \"collect-profiles-29493045-ndfr5\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.295155 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctqm\" (UniqueName: \"kubernetes.io/projected/70c953e5-75c5-4a06-aacd-913ddfbea45f-kube-api-access-bctqm\") pod \"packageserver-d55dfcdfc-7rghx\" (UID: \"70c953e5-75c5-4a06-aacd-913ddfbea45f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.309640 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hvht\" (UniqueName: \"kubernetes.io/projected/7b7c9f44-73e2-4109-827b-bed3a722a78c-kube-api-access-9hvht\") pod \"router-default-5444994796-9dsht\" (UID: \"7b7c9f44-73e2-4109-827b-bed3a722a78c\") " pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.310406 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.316013 4642 util.go:30] "No sandbox for pod can be found. 
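[Annotation] The repeating UnmountVolume.TearDown and MountVolume.MountDevice failures above share one root cause: the kubelet resolves CSI operations through its in-memory list of drivers that have announced themselves over the plugin-registration socket, and kubevirt.io.hostpath-provisioner has not registered yet (the hostpath-provisioner/csi-hostpathplugin-t2ncd pod is itself only getting a sandbox in this same window). Below is a minimal Go sketch of that lookup-then-fail pattern; the type and field names are hypothetical, not the kubelet's actual exported API.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry mirrors, in spirit, the kubelet's in-memory map of CSI
// drivers that have completed plugin registration. Hypothetical names.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket path
}

func (r *driverRegistry) endpoint(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// The condition behind the log line:
		// "driver name ... not found in the list of registered CSI drivers".
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}}

	// Before the hostpath plugin pod registers: every mount/unmount fails.
	if _, err := reg.endpoint("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("TearDown/MountDevice would fail:", err)
	}

	// Once csi-hostpathplugin comes up and registers its socket, the same
	// lookup succeeds and the queued operations can make progress.
	reg.mu.Lock()
	reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
	reg.mu.Unlock()

	ep, _ := reg.endpoint("kubevirt.io.hostpath-provisioner")
	fmt.Println("after registration, dial:", ep)
}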
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cj4zp" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.337129 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47t7v\" (UniqueName: \"kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v\") pod \"marketplace-operator-79b997595-965rk\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:23 crc kubenswrapper[4642]: W0128 06:50:23.339948 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5c1bf9_0f8d_4363_8afc_f764165812c8.slice/crio-b18f645d44a502c8183f4e50ba70dcb71d6d2ef5c5e5aa7e1becc2b5370c98a1 WatchSource:0}: Error finding container b18f645d44a502c8183f4e50ba70dcb71d6d2ef5c5e5aa7e1becc2b5370c98a1: Status 404 returned error can't find the container with id b18f645d44a502c8183f4e50ba70dcb71d6d2ef5c5e5aa7e1becc2b5370c98a1 Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.351098 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e3520a4-607d-48a0-ad47-32d5fe2efc54-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xvw8z\" (UID: \"0e3520a4-607d-48a0-ad47-32d5fe2efc54\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.363549 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.386734 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.387210 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.887178444 +0000 UTC m=+147.119267253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.392560 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhsd7\" (UniqueName: \"kubernetes.io/projected/ab318e2b-3fdb-4d5d-87df-5242468f1932-kube-api-access-zhsd7\") pod \"service-ca-9c57cc56f-cc8qm\" (UID: \"ab318e2b-3fdb-4d5d-87df-5242468f1932\") " pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.396930 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjsl\" (UniqueName: \"kubernetes.io/projected/96101a91-8390-4743-821e-7bc7fe37f8e4-kube-api-access-tbjsl\") pod \"ingress-canary-v4lf2\" (UID: \"96101a91-8390-4743-821e-7bc7fe37f8e4\") " pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.410606 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgpn\" (UniqueName: \"kubernetes.io/projected/e24a32c1-9c23-41a2-b3a2-58ba818a0534-kube-api-access-9zgpn\") pod \"multus-admission-controller-857f4d67dd-b8v8n\" (UID: \"e24a32c1-9c23-41a2-b3a2-58ba818a0534\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.453152 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72wn\" (UniqueName: \"kubernetes.io/projected/1711b0d7-ac23-40bd-b522-46a7148d7a6f-kube-api-access-c72wn\") pod \"csi-hostpathplugin-t2ncd\" (UID: \"1711b0d7-ac23-40bd-b522-46a7148d7a6f\") " pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.460624 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skcq\" (UniqueName: \"kubernetes.io/projected/399e3aa1-f241-4695-ba69-c8e2900b7bb2-kube-api-access-8skcq\") pod \"machine-config-controller-84d6567774-m6fnq\" (UID: \"399e3aa1-f241-4695-ba69-c8e2900b7bb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.468586 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.476627 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.476933 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2x6c\" (UniqueName: \"kubernetes.io/projected/fa7dcf61-b17b-42d7-8b79-e4bd9af04a31-kube-api-access-d2x6c\") pod \"package-server-manager-789f6589d5-nsxdk\" (UID: \"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.481707 4642 util.go:30] "No sandbox for pod can be found. 
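[Annotation] The nestedpendingoperations.go:348 entries show how the volume manager rate-limits these doomed operations rather than spinning: each failure stamps the operation with a deadline ("No retries permitted until ... (durationBeforeRetry 500ms)"), and the reconciler skips it until the deadline passes, which is why the same error pair recurs at roughly half-second intervals through this log. A minimal fixed-delay sketch of that gate follows, under assumed names; the kubelet's real backoff can grow exponentially for a persistently failing operation, whereas this log happens to show the initial 500ms on every attempt.

package main

import (
	"fmt"
	"time"
)

// retryGate is a simplified sketch of the gate that produces
// "No retries permitted until ... (durationBeforeRetry 500ms)".
type retryGate struct {
	notBefore map[string]time.Time // operation key -> earliest retry time
	delay     time.Duration
}

func (g *retryGate) tryRun(key string, op func() error) error {
	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore[key] = time.Now().Add(g.delay)
		return fmt.Errorf("operation failed, retry gated for %s: %w", g.delay, err)
	}
	delete(g.notBefore, key)
	return nil
}

func main() {
	g := &retryGate{notBefore: map[string]time.Time{}, delay: 500 * time.Millisecond}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-example"

	// First attempt fails (driver not registered yet) and arms the gate.
	_ = g.tryRun(key, func() error { return fmt.Errorf("driver not registered") })

	// An immediate second attempt is refused without even running the op.
	if err := g.tryRun(key, func() error { return nil }); err != nil {
		fmt.Println(err)
	}
}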
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.487370 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.497369 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.497650 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:23.997635219 +0000 UTC m=+147.229724028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.499629 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.505882 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.514597 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.569085 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.584311 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.590715 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.590819 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc"] Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.592298 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:23 crc kubenswrapper[4642]: W0128 06:50:23.596254 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b7c9f44_73e2_4109_827b_bed3a722a78c.slice/crio-c9b707cf095eccd5ba4fe84aa435177d41a81c179392284aa18c660f13916186 WatchSource:0}: Error finding container c9b707cf095eccd5ba4fe84aa435177d41a81c179392284aa18c660f13916186: Status 404 returned error can't find the container with id c9b707cf095eccd5ba4fe84aa435177d41a81c179392284aa18c660f13916186 Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.597542 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.598548 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.598893 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.098876638 +0000 UTC m=+147.330965447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.654524 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.657321 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-v4lf2" Jan 28 06:50:23 crc kubenswrapper[4642]: W0128 06:50:23.675906 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-37aefcf547b6cefa2b97d2d573ec47bd5b54a82581f4e9d609924b6a1bfd14ab WatchSource:0}: Error finding container 37aefcf547b6cefa2b97d2d573ec47bd5b54a82581f4e9d609924b6a1bfd14ab: Status 404 returned error can't find the container with id 37aefcf547b6cefa2b97d2d573ec47bd5b54a82581f4e9d609924b6a1bfd14ab Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.698938 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.699300 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.199286803 +0000 UTC m=+147.431375612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.708883 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"37aefcf547b6cefa2b97d2d573ec47bd5b54a82581f4e9d609924b6a1bfd14ab"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.710052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" event={"ID":"d1b83e60-dd95-437c-847c-f60f9c33ee1f","Type":"ContainerStarted","Data":"1cd020f606dd1e416ca2bbb1823762f21592ca2173f6306ad0c8ce977c2a1a0f"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.710082 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" event={"ID":"d1b83e60-dd95-437c-847c-f60f9c33ee1f","Type":"ContainerStarted","Data":"cc4db279f6dbc3374b7a8591397045a7129befc317ea0f21e603681aacc4b813"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.716816 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" event={"ID":"9fe7270c-3b58-4011-9776-1360c24896ca","Type":"ContainerStarted","Data":"a2b0a232a650542d9eb845e86a23b2bc24fff7cf2eebcc735f3a16bd45c8bba2"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.739773 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" 
event={"ID":"536f8472-158f-45c2-a0f1-b6799b6bdbdd","Type":"ContainerStarted","Data":"dfe64a9d11755f37bea7363d8434cd4c66566e14f8f1a8afc3093362599839b9"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.740987 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" event={"ID":"536f8472-158f-45c2-a0f1-b6799b6bdbdd","Type":"ContainerStarted","Data":"5c3284dccdcc2ae3fb297a9444a943a5b3169536aa87ea51e400d96b595f2b72"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.741003 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" event={"ID":"536f8472-158f-45c2-a0f1-b6799b6bdbdd","Type":"ContainerStarted","Data":"15bd9b51bc94ad22d88da6aff1e6bdd4bce66caf9dc526f2e257753377b482d2"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.741955 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k2t68" event={"ID":"8f8a2fee-019f-4885-93ad-57787b4478d8","Type":"ContainerStarted","Data":"826ca7e3d5bcea886bb10933fcea73e421fccb9b5d63d9c0a3ea18a59e1f0d4a"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.741997 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k2t68" event={"ID":"8f8a2fee-019f-4885-93ad-57787b4478d8","Type":"ContainerStarted","Data":"07c40cda3bafbb00d3f68640ddc2a8b01b1dc6efc49a73272ca0306af6e11454"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.742482 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k2t68" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.743826 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" event={"ID":"6680af4f-2095-41e6-9343-63ab88645dea","Type":"ContainerStarted","Data":"652cbdb1ac09d98bfeecaede16a4c14ccc020f72fca77b334e7e1e9727c33950"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.743979 4642 patch_prober.go:28] interesting pod/downloads-7954f5f757-k2t68 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.744035 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k2t68" podUID="8f8a2fee-019f-4885-93ad-57787b4478d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.746702 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zrrr7" event={"ID":"4d5c1bf9-0f8d-4363-8afc-f764165812c8","Type":"ContainerStarted","Data":"b18f645d44a502c8183f4e50ba70dcb71d6d2ef5c5e5aa7e1becc2b5370c98a1"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.752464 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" event={"ID":"5cd3e28d-c288-4b1d-81c2-dff387bb3de7","Type":"ContainerStarted","Data":"43eebe1d82cb2ad99d5090d825dcf7908fcb650dd17a72fa8ecaecf01d3eaba5"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.752487 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" 
event={"ID":"5cd3e28d-c288-4b1d-81c2-dff387bb3de7","Type":"ContainerStarted","Data":"dc8fb949ee3dc4e090c749e9ce23338f48019a52a167121e6d41ca5c31d9f42d"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.753002 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.754987 4642 patch_prober.go:28] interesting pod/console-operator-58897d9998-xdv7f container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.755022 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" podUID="5cd3e28d-c288-4b1d-81c2-dff387bb3de7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.755126 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9dsht" event={"ID":"7b7c9f44-73e2-4109-827b-bed3a722a78c","Type":"ContainerStarted","Data":"c9b707cf095eccd5ba4fe84aa435177d41a81c179392284aa18c660f13916186"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.757838 4642 generic.go:334] "Generic (PLEG): container finished" podID="ae263a90-0a71-40b8-bda1-ec21b3680994" containerID="d68352b4e2d67acf793c1a4a3295a26b12882fab7de4be1e6f5dfd014982a6aa" exitCode=0 Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.757894 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" event={"ID":"ae263a90-0a71-40b8-bda1-ec21b3680994","Type":"ContainerDied","Data":"d68352b4e2d67acf793c1a4a3295a26b12882fab7de4be1e6f5dfd014982a6aa"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.757910 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" event={"ID":"ae263a90-0a71-40b8-bda1-ec21b3680994","Type":"ContainerStarted","Data":"2ff3d1a430646007254f36172eb452ebdc9e858b4ae4464da35ac85620870731"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.765506 4642 generic.go:334] "Generic (PLEG): container finished" podID="39c42149-4d53-4e72-84f6-0c09d0f86ee2" containerID="110f19b575e26a6fbc56b030fdd8c3b888c49ae556f2137bf46dbb83b358234b" exitCode=0 Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.765577 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" event={"ID":"39c42149-4d53-4e72-84f6-0c09d0f86ee2","Type":"ContainerDied","Data":"110f19b575e26a6fbc56b030fdd8c3b888c49ae556f2137bf46dbb83b358234b"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.770617 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" event={"ID":"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf","Type":"ContainerStarted","Data":"2069824e95aabaff990674263708e8a69d3855b2a14dfd8b8c12da7833be8d14"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.770653 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" 
event={"ID":"a18026d8-4feb-4dd9-b0ab-c24e4aa102bf","Type":"ContainerStarted","Data":"8b88f4d8ab54c6876227014756d2941b469be7b5c3f149d44c1fbe3af41b3497"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.778531 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cj4zp" event={"ID":"4e5b3697-2b63-4067-9a1c-6011c513a643","Type":"ContainerStarted","Data":"ffa05b199838ec9cd2b7620b7dc073affd2cb273c1b7d3ef0daf564c219e4d5e"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.783669 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" event={"ID":"1d9faf46-d412-4182-96a4-f8350fd4c34e","Type":"ContainerStarted","Data":"9cb244a6245a92262ccae2c4187dae820334a649771ab142179d166c7b3c3446"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.783882 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.784810 4642 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s9zdh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.784842 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.785721 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" event={"ID":"a4310bc4-d7e7-4f8b-832b-57bceda71a45","Type":"ContainerStarted","Data":"0539d90d377dce9a8ffd4fd92e903761054a18c41d6001ab10ccde7944c39791"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.785747 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" event={"ID":"a4310bc4-d7e7-4f8b-832b-57bceda71a45","Type":"ContainerStarted","Data":"54463251fc2cf840e27384500d94711273888e542012bf8e41db5764d78c4c91"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.786292 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.789788 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" event={"ID":"536c56c9-e355-4b23-a079-9e34f4bc9123","Type":"ContainerStarted","Data":"ba819a463332e141cd864a6fda4acd58132377b48c45af2006747768cdd9bf56"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.789814 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" event={"ID":"536c56c9-e355-4b23-a079-9e34f4bc9123","Type":"ContainerStarted","Data":"17da01d2bcbf41d90c990493541615f9ffc38b1c7a2054695eace570ac5f62c7"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.789825 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" 
event={"ID":"536c56c9-e355-4b23-a079-9e34f4bc9123","Type":"ContainerStarted","Data":"6298e061816ad06258f4a89f7753a981eccf4882d0d8a407e4f1cd60e369225b"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.796419 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" event={"ID":"33724c18-9aa8-4004-accc-fd8ff92cb999","Type":"ContainerStarted","Data":"392b7c976613090a9ff59250ac225b925ba59bac649c5701366b2f397193b80b"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.796443 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" event={"ID":"33724c18-9aa8-4004-accc-fd8ff92cb999","Type":"ContainerStarted","Data":"ae9694ce759b662d2619209c1584dfd5d4cdafc8fc32f73c3e05df8751df2869"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.800047 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.801139 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.301123993 +0000 UTC m=+147.533212802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.801662 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"649b459cb0da79389042861eca93d084f8be3f2a6336a4fb73d0ff6ba7348d56"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.813350 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" event={"ID":"3f917c23-bd51-44a0-b75a-7acf03f1d2de","Type":"ContainerStarted","Data":"20241048a239187317a740306e9d97867be3932cfd9c19b057df709e4f860ab9"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.815977 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kjn9r"] Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.825905 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" event={"ID":"c1575b37-adf2-48d0-98ac-2ed31860ebeb","Type":"ContainerStarted","Data":"e36ec27442189a76355d2c4095778eabfdc746be5e568703af2e28eb128d35ff"} Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.903146 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.904267 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggxxq"] Jan 28 06:50:23 crc kubenswrapper[4642]: E0128 06:50:23.904898 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.404877207 +0000 UTC m=+147.636966016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:23 crc kubenswrapper[4642]: I0128 06:50:23.957724 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.013439 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.013752 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.513739402 +0000 UTC m=+147.745828202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.034918 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.114487 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.114493 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b626n" podStartSLOduration=124.11448101 podStartE2EDuration="2m4.11448101s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.112654014 +0000 UTC m=+147.344742823" watchObservedRunningTime="2026-01-28 06:50:24.11448101 +0000 UTC m=+147.346569820" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.114610 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.614598122 +0000 UTC m=+147.846686930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.115134 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.115478 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.615470141 +0000 UTC m=+147.847558951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.173339 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cj4zp" podStartSLOduration=4.173316996 podStartE2EDuration="4.173316996s" podCreationTimestamp="2026-01-28 06:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.169225919 +0000 UTC m=+147.401314728" watchObservedRunningTime="2026-01-28 06:50:24.173316996 +0000 UTC m=+147.405405805" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.216278 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.216855 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.716842267 +0000 UTC m=+147.948931075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.251176 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dj2m9" podStartSLOduration=124.251161469 podStartE2EDuration="2m4.251161469s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.24944483 +0000 UTC m=+147.481533639" watchObservedRunningTime="2026-01-28 06:50:24.251161469 +0000 UTC m=+147.483250278" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.317445 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.317695 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.817684575 +0000 UTC m=+148.049773384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.410028 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" podStartSLOduration=124.410012976 podStartE2EDuration="2m4.410012976s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.40758054 +0000 UTC m=+147.639669349" watchObservedRunningTime="2026-01-28 06:50:24.410012976 +0000 UTC m=+147.642101785" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.418417 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.418704 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:24.918693116 +0000 UTC m=+148.150781925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.519523 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.519894 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.019881364 +0000 UTC m=+148.251970173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.622779 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.623854 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jtkc5" podStartSLOduration=124.623838562 podStartE2EDuration="2m4.623838562s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.612708234 +0000 UTC m=+147.844797043" watchObservedRunningTime="2026-01-28 06:50:24.623838562 +0000 UTC m=+147.855927371" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.624056 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ttt2d" podStartSLOduration=124.624052945 podStartE2EDuration="2m4.624052945s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.573477432 +0000 UTC m=+147.805566230" watchObservedRunningTime="2026-01-28 06:50:24.624052945 +0000 UTC m=+147.856141754" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.625663 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.125648447 +0000 UTC m=+148.357737256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.627658 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.628052 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.128039044 +0000 UTC m=+148.360127854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.678311 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.678415 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.733769 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.734307 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.234295177 +0000 UTC m=+148.466383986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.734473 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl"] Jan 28 06:50:24 crc kubenswrapper[4642]: W0128 06:50:24.772639 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6364f994_c14f_4cb9_ab55_ad13ad0973f1.slice/crio-a60e01e61d2b2b97cd66c61a53c3b8444be1912e78b8ab6ae6e119ea0de08bb6 WatchSource:0}: Error finding container a60e01e61d2b2b97cd66c61a53c3b8444be1912e78b8ab6ae6e119ea0de08bb6: Status 404 returned error can't find the container with id a60e01e61d2b2b97cd66c61a53c3b8444be1912e78b8ab6ae6e119ea0de08bb6 Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.823288 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.824448 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.839638 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.839920 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.339908 +0000 UTC m=+148.571996809 (durationBeforeRetry 500ms). 
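[Annotation] The manager.go:1169 warnings ("Failed to process watch event ... Status 404 ... can't find the container with id ...") read as a startup race rather than a fault: the cgroup watch fires as soon as CRI-O creates the crio-<id> scope under kubepods.slice, while the container-status lookup behind it can still return 404 until the runtime has the container fully registered; the matching ContainerStarted events arrive in the very next PLEG pass (compare the crio-a60e01e6... warning here with machine-config-operator's ContainerStarted at 06:50:24.865381). A sketch of tolerating that race with a short bounded retry; the helper and sentinel error are hypothetical:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("can't find the container with id")

// statusWithRetry retries a 404 a few times before surfacing it, on the
// theory (visible in this log) that the runtime usually catches up
// within about one PLEG period. lookup stands in for a runtime status call.
func statusWithRetry(id string, lookup func(string) error) error {
	for attempt := 0; attempt < 3; attempt++ {
		err := lookup(id)
		if err == nil || !errors.Is(err, errNotFound) {
			return err
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("giving up on %s: %w", id, errNotFound)
}

func main() {
	calls := 0
	err := statusWithRetry("a60e01e61d2b", func(string) error {
		calls++
		if calls < 2 {
			return errNotFound // first lookup races the runtime
		}
		return nil
	})
	fmt.Println("resolved after", calls, "lookups, err =", err)
}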
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.856027 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" podStartSLOduration=124.856011204 podStartE2EDuration="2m4.856011204s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.826357167 +0000 UTC m=+148.058445976" watchObservedRunningTime="2026-01-28 06:50:24.856011204 +0000 UTC m=+148.088100013" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.865381 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" event={"ID":"6364f994-c14f-4cb9-ab55-ad13ad0973f1","Type":"ContainerStarted","Data":"a60e01e61d2b2b97cd66c61a53c3b8444be1912e78b8ab6ae6e119ea0de08bb6"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.878998 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mmdr5"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.903812 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" event={"ID":"c1575b37-adf2-48d0-98ac-2ed31860ebeb","Type":"ContainerStarted","Data":"dbc6bb232110acfae13d85808b569a4e1c9256d55bf34e832d66af8ff7dc9f84"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.903846 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" event={"ID":"c1575b37-adf2-48d0-98ac-2ed31860ebeb","Type":"ContainerStarted","Data":"a57181de49e3c5749efc81de59effc984e5a3ca60915e5e3d29cc5a7f1d0b208"} Jan 28 06:50:24 crc kubenswrapper[4642]: W0128 06:50:24.905717 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a63bb09_02c3_415a_b7b5_577d3136029a.slice/crio-20ebbd420efd71e6e781b2d938804105fe9c417ee356908b0b33b30a9bb50cca WatchSource:0}: Error finding container 20ebbd420efd71e6e781b2d938804105fe9c417ee356908b0b33b30a9bb50cca: Status 404 returned error can't find the container with id 20ebbd420efd71e6e781b2d938804105fe9c417ee356908b0b33b30a9bb50cca Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.906076 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-t78bf" podStartSLOduration=124.906064235 podStartE2EDuration="2m4.906064235s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.904673369 +0000 UTC m=+148.136762178" watchObservedRunningTime="2026-01-28 06:50:24.906064235 +0000 UTC m=+148.138153044" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.907629 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" event={"ID":"28e7930c-b0c7-4ef7-975d-fe130a30089c","Type":"ContainerStarted","Data":"48df880e545eb823e768d21ee930892c6183d41aa964dafdd20e700f809466dc"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.924751 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" event={"ID":"bfaa1a5c-0e92-4979-af54-075acbc06ea7","Type":"ContainerStarted","Data":"3cdb3a74ccedae0f6a9d1c05f17b81e31228a7783d184b6466dce96599d23c5a"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.926940 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9dsht" event={"ID":"7b7c9f44-73e2-4109-827b-bed3a722a78c","Type":"ContainerStarted","Data":"0c51ffc10d8423438645109bcac9294e1079145da86768f3e7b1de58e2f5e2b3"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.932033 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9378ae1f1d057a37be140a6cb29a7a5d5f176c879ccb4fe80a3b128bc8a48628"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.933391 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk"] Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.939635 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xdv7f" podStartSLOduration=124.939621834 podStartE2EDuration="2m4.939621834s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:24.939543438 +0000 UTC m=+148.171632246" watchObservedRunningTime="2026-01-28 06:50:24.939621834 +0000 UTC m=+148.171710644" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.942137 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.942354 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.442306345 +0000 UTC m=+148.674395154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.942629 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:24 crc kubenswrapper[4642]: E0128 06:50:24.943056 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.443043101 +0000 UTC m=+148.675131909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.955156 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" event={"ID":"2f0d37e4-6b1f-4211-b41d-77838d34c9bd","Type":"ContainerStarted","Data":"2e647298906e79f2044d62b59c542438f896501565021eb8e8c031b704a9d634"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.955218 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" event={"ID":"2f0d37e4-6b1f-4211-b41d-77838d34c9bd","Type":"ContainerStarted","Data":"862de92d8b70fa3586d39adb3d34961712e2467549ae23e0b04b3ad88812fdc2"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.956169 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.964209 4642 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-t7v5l container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.964255 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" podUID="2f0d37e4-6b1f-4211-b41d-77838d34c9bd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.965545 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"934c9a17d505a558f8b0bad2209b3828bca2d90cb2ab8249835b117d155cdbda"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.965579 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"003b7d7d274b9b65d44eff11dd64a05abd144daea4bf5acc8477e7060d3a32d9"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.966764 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.993581 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9359bcf747bb474a285aff4283a965e4d01fe7ec83849369bd29a06439fdc451"} Jan 28 06:50:24 crc kubenswrapper[4642]: I0128 06:50:24.998486 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.004848 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" event={"ID":"ae263a90-0a71-40b8-bda1-ec21b3680994","Type":"ContainerStarted","Data":"682565c2481f49e0e96b918e7b455972eade8a9dd1ea57ee37564cd360d9b55d"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.008969 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" event={"ID":"522a2fa1-337b-4bb2-ac79-06d592d59bbe","Type":"ContainerStarted","Data":"ee197617bd954a27213b2e459d0a266c8099539362044ef50332732c76d0649e"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.008991 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" event={"ID":"522a2fa1-337b-4bb2-ac79-06d592d59bbe","Type":"ContainerStarted","Data":"b1ee6ead225e4acd19c37b0a0865788db0f97e00fff156e9d3b4a6ee2f3ec6b1"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.021424 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.024666 4642 generic.go:334] "Generic (PLEG): container finished" podID="51a36588-71bc-409b-b36b-2f76917e5c40" containerID="985a1c8f7ad0a6311067cf7c0563d14a53d0e4c7b06430933d57d52bae5e6397" exitCode=0 Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.024726 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" event={"ID":"51a36588-71bc-409b-b36b-2f76917e5c40","Type":"ContainerDied","Data":"985a1c8f7ad0a6311067cf7c0563d14a53d0e4c7b06430933d57d52bae5e6397"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.024753 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" event={"ID":"51a36588-71bc-409b-b36b-2f76917e5c40","Type":"ContainerStarted","Data":"885fd9baac02627b2ba1cbb93797de3f6bbc583c54ebdac4203222a540c47006"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.027761 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.040533 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.043555 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.043997 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.044179 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.544155627 +0000 UTC m=+148.776244437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.044335 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.045938 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.545925578 +0000 UTC m=+148.778014387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.046964 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-v4lf2"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.048416 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.049784 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b8v8n"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.051153 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cc8qm"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.058648 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.078744 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" event={"ID":"39c42149-4d53-4e72-84f6-0c09d0f86ee2","Type":"ContainerStarted","Data":"8d497486fea68e2899c145361248059b2fa334a7caa8dd67e7b430487c3cb999"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.078784 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:25 crc kubenswrapper[4642]: W0128 06:50:25.092396 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode24a32c1_9c23_41a2_b3a2_58ba818a0534.slice/crio-bf2747b54f731fac75ee71701ea70b9e2d49a962a87adaee96a69b30a28d5031 WatchSource:0}: Error finding container bf2747b54f731fac75ee71701ea70b9e2d49a962a87adaee96a69b30a28d5031: Status 404 returned error can't find the container with id bf2747b54f731fac75ee71701ea70b9e2d49a962a87adaee96a69b30a28d5031 Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.092727 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" event={"ID":"3ed61f23-b0b6-4a75-9f9c-44a992a64d23","Type":"ContainerStarted","Data":"949d30e971a64837560d12686c36196753107cfc7b0b3ca8660db110af38f233"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.092757 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" event={"ID":"3ed61f23-b0b6-4a75-9f9c-44a992a64d23","Type":"ContainerStarted","Data":"22326decbba305092dfaf37ab087c858c2100971c0e195dad2033196a3adf056"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.093574 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.100373 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8xmnf" podStartSLOduration=125.100362596 podStartE2EDuration="2m5.100362596s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.09372559 +0000 UTC m=+148.325814399" watchObservedRunningTime="2026-01-28 06:50:25.100362596 +0000 UTC m=+148.332451405" Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.113759 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-t2ncd"] Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.113804 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.120436 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cj4zp" event={"ID":"4e5b3697-2b63-4067-9a1c-6011c513a643","Type":"ContainerStarted","Data":"8a7554d1493ee02507cfb7afc27c5a922a1c2269a1f8509d449d3eb5f9b88f09"} Jan 28 06:50:25 crc kubenswrapper[4642]: W0128 06:50:25.143879 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab318e2b_3fdb_4d5d_87df_5242468f1932.slice/crio-20c22daa8911616ef26adaf045e8ffe116fdb23571e765b1d00e09164e26ef76 WatchSource:0}: Error finding container 20c22daa8911616ef26adaf045e8ffe116fdb23571e765b1d00e09164e26ef76: Status 404 returned error can't find the container with id 20c22daa8911616ef26adaf045e8ffe116fdb23571e765b1d00e09164e26ef76 Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.145366 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.146487 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.64647243 +0000 UTC m=+148.878561239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:25 crc kubenswrapper[4642]: W0128 06:50:25.158287 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78e696e7_07de_444b_8c5c_1bc69534b3b8.slice/crio-1a70c46493a2d40764869ec1fb0416606f76f7a240cc502c269d2c08dd76ac59 WatchSource:0}: Error finding container 1a70c46493a2d40764869ec1fb0416606f76f7a240cc502c269d2c08dd76ac59: Status 404 returned error can't find the container with id 1a70c46493a2d40764869ec1fb0416606f76f7a240cc502c269d2c08dd76ac59 Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.159377 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" event={"ID":"1d9faf46-d412-4182-96a4-f8350fd4c34e","Type":"ContainerStarted","Data":"e7079d16ef882be6c81243ffe9c1fe72341e956c4f23f620c2ec82dc87aa52d7"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.165061 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" event={"ID":"10725107-b912-41e8-b667-5d42e2fa5cd8","Type":"ContainerStarted","Data":"f5887395cc794d905b37a8930e58fb245c87441950d673cc2c426246273e5f69"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.165100 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" event={"ID":"10725107-b912-41e8-b667-5d42e2fa5cd8","Type":"ContainerStarted","Data":"d36c0f37d4eb47f4a3be7b5a05f3b5c76628186797ed8ea8f530bf5578528644"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.166431 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.174594 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8q4rl" event={"ID":"3f917c23-bd51-44a0-b75a-7acf03f1d2de","Type":"ContainerStarted","Data":"535bb0404cae150d0a200c5509f417f5bef66ad9d33aef1dcbd41bc8380fe5b7"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.177225 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zrrr7" event={"ID":"4d5c1bf9-0f8d-4363-8afc-f764165812c8","Type":"ContainerStarted","Data":"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.183146 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" event={"ID":"6680af4f-2095-41e6-9343-63ab88645dea","Type":"ContainerStarted","Data":"928f66f3fe99b97aa9c8548463fb4ff86585fa98b18abb05526298eb8174f4fc"} Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.183165 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" event={"ID":"6680af4f-2095-41e6-9343-63ab88645dea","Type":"ContainerStarted","Data":"a861f9d9b588e44bf3609848ef887d7621beb5dd9a4c7692b4de153a02cea44a"} 
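Note on the recurring CSI failures above: every MountVolume.MountDevice and UnmountVolume.TearDown error in this window has the same root cause. The kubelet can only talk to a CSI driver after that driver's node plugin registers itself over the kubelet's plugin-registration socket, and the hostpath-provisioner node plugin (pod hostpath-provisioner/csi-hostpathplugin-t2ncd, which per the surrounding entries is only scheduled at 06:50:25.113759 and starts a container at 06:50:26.397266) has not registered yet. Until it does, kubevirt.io.hostpath-provisioner is absent from the registered-driver list and each volume operation is requeued with durationBeforeRetry 500ms, which is exactly the repeating pattern in these records. Registration state is mirrored in the cluster's CSINode object; the following is a minimal client-go sketch (not part of this log), assuming a reachable default kubeconfig and using the node name "crc" seen in the entries above.

// checkcsi.go: print the CSI drivers registered on a node's kubelet.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A driver appears in CSINode.spec.drivers only after its node plugin has
	// registered with the kubelet; until then the kubelet logs
	// "driver name ... not found in the list of registered CSI drivers".
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered CSI driver: %s (nodeID %q)\n", d.Name, d.NodeID)
	}
}

Once csi-hostpathplugin-t2ncd finishes registering, a subsequent 500ms retry should succeed and the image-registry pod's PVC mount can proceed; the equivalent one-off check is kubectl get csinode crc -o jsonpath='{.spec.drivers[*].name}'.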
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.184036 4642 patch_prober.go:28] interesting pod/downloads-7954f5f757-k2t68 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.184058 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k2t68" podUID="8f8a2fee-019f-4885-93ad-57787b4478d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.215889 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xdv7f"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.252234 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.254613 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.754601285 +0000 UTC m=+148.986690094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.274282 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" podStartSLOduration=125.274263886 podStartE2EDuration="2m5.274263886s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.266607953 +0000 UTC m=+148.498696762" watchObservedRunningTime="2026-01-28 06:50:25.274263886 +0000 UTC m=+148.506352684"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.336876 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k2t68" podStartSLOduration=125.336857428 podStartE2EDuration="2m5.336857428s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.336566331 +0000 UTC m=+148.568655140" watchObservedRunningTime="2026-01-28 06:50:25.336857428 +0000 UTC m=+148.568946237"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.354133 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.354264 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.854250709 +0000 UTC m=+149.086339518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.354809 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.359481 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.859471811 +0000 UTC m=+149.091560620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.383233 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zrrr7" podStartSLOduration=125.383214336 podStartE2EDuration="2m5.383214336s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.380134161 +0000 UTC m=+148.612222971" watchObservedRunningTime="2026-01-28 06:50:25.383214336 +0000 UTC m=+148.615303146"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.456260 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.456546 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:25.956529811 +0000 UTC m=+149.188618619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.477370 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9dsht"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.486366 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 06:50:25 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld
Jan 28 06:50:25 crc kubenswrapper[4642]: [+]process-running ok
Jan 28 06:50:25 crc kubenswrapper[4642]: healthz check failed
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.486415 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.560461 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.560719 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.060708294 +0000 UTC m=+149.292797104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.614533 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" podStartSLOduration=125.614513604 podStartE2EDuration="2m5.614513604s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.614214332 +0000 UTC m=+148.846303140" watchObservedRunningTime="2026-01-28 06:50:25.614513604 +0000 UTC m=+148.846602413"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.664971 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.665309 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.165282462 +0000 UTC m=+149.397371272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.665439 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.665709 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.165702483 +0000 UTC m=+149.397791292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.702498 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kjn9r" podStartSLOduration=125.702483376 podStartE2EDuration="2m5.702483376s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.654723348 +0000 UTC m=+148.886812158" watchObservedRunningTime="2026-01-28 06:50:25.702483376 +0000 UTC m=+148.934572186"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.702919 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" podStartSLOduration=125.702913957 podStartE2EDuration="2m5.702913957s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.701294651 +0000 UTC m=+148.933383460" watchObservedRunningTime="2026-01-28 06:50:25.702913957 +0000 UTC m=+148.935002766"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.769933 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.770256 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.270243219 +0000 UTC m=+149.502332027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.779364 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p5lc2" podStartSLOduration=125.779349289 podStartE2EDuration="2m5.779349289s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.778463463 +0000 UTC m=+149.010552272" watchObservedRunningTime="2026-01-28 06:50:25.779349289 +0000 UTC m=+149.011438098"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.826883 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" podStartSLOduration=125.826867893 podStartE2EDuration="2m5.826867893s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.819036951 +0000 UTC m=+149.051125760" watchObservedRunningTime="2026-01-28 06:50:25.826867893 +0000 UTC m=+149.058956702"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.856374 4642 csr.go:261] certificate signing request csr-scddq is approved, waiting to be issued
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.857468 4642 csr.go:257] certificate signing request csr-scddq is issued
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.875821 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.876081 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.376070324 +0000 UTC m=+149.608159123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.889728 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9dsht" podStartSLOduration=125.889694785 podStartE2EDuration="2m5.889694785s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.857180738 +0000 UTC m=+149.089269548" watchObservedRunningTime="2026-01-28 06:50:25.889694785 +0000 UTC m=+149.121783594"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.891159 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hwwqg" podStartSLOduration=125.891152157 podStartE2EDuration="2m5.891152157s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:25.88902763 +0000 UTC m=+149.121116439" watchObservedRunningTime="2026-01-28 06:50:25.891152157 +0000 UTC m=+149.123240965"
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.979595 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.979886 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.479848756 +0000 UTC m=+149.711937565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:25 crc kubenswrapper[4642]: I0128 06:50:25.979954 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:25 crc kubenswrapper[4642]: E0128 06:50:25.981321 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.481311167 +0000 UTC m=+149.713399976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.081719 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.082226 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.582206264 +0000 UTC m=+149.814295074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.184837 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.185442 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.685430343 +0000 UTC m=+149.917519152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.284504 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" event={"ID":"6364f994-c14f-4cb9-ab55-ad13ad0973f1","Type":"ContainerStarted","Data":"6431df3dddf69f08dcf5bf97a5dac08038a4872e26de48ce5e33053b96392baa"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.288666 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.288925 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.788911445 +0000 UTC m=+150.021000255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.319240 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" event={"ID":"ab246574-4025-47da-a132-1d1e72b35a00","Type":"ContainerStarted","Data":"a417b9b24d811f44d5802c81373a18a801aa7cfdaa6850f2e277887bbdac72f0"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.319573 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-965rk"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.319585 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" event={"ID":"ab246574-4025-47da-a132-1d1e72b35a00","Type":"ContainerStarted","Data":"53f77fca7253f5374987afb92b9534da91f86cedbd690edc761a95d211efc77b"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.328897 4642 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-965rk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.328939 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.335831 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" event={"ID":"d980a7da-b442-446b-8bde-56e17d70b28b","Type":"ContainerStarted","Data":"83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.335867 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" event={"ID":"d980a7da-b442-446b-8bde-56e17d70b28b","Type":"ContainerStarted","Data":"39cf3954429937a5ef0ae5a75531fbc7ced8e3b95b28b33d8847ccc7c3408f80"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.343850 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" podStartSLOduration=126.34384017 podStartE2EDuration="2m6.34384017s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.343111989 +0000 UTC m=+149.575200798" watchObservedRunningTime="2026-01-28 06:50:26.34384017 +0000 UTC m=+149.575928978"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.374234 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" podStartSLOduration=126.374217295 podStartE2EDuration="2m6.374217295s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.373705412 +0000 UTC m=+149.605794222" watchObservedRunningTime="2026-01-28 06:50:26.374217295 +0000 UTC m=+149.606306105"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.382441 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mmdr5" event={"ID":"a84eeb48-6e52-4989-b622-f1031dad991a","Type":"ContainerStarted","Data":"46d4ed4e85433c9f611834d208a8c9f65665d3ad65e7e7c85a6d1808036443ab"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.382503 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mmdr5" event={"ID":"a84eeb48-6e52-4989-b622-f1031dad991a","Type":"ContainerStarted","Data":"a7fc36857bd00b5608cbf20200e4825bce2c0ded948d8dbf1c9784413ba944db"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.392095 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw"
Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.392534 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.892523303 +0000 UTC m=+150.124612113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.397266 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" event={"ID":"1711b0d7-ac23-40bd-b522-46a7148d7a6f","Type":"ContainerStarted","Data":"2a1e5cf563e8054612ced4733b95747c28d78a15e6480491ee85b98555766a8b"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.437945 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" event={"ID":"70c953e5-75c5-4a06-aacd-913ddfbea45f","Type":"ContainerStarted","Data":"359ac690fd3cf16d65d7787d7c2a2408b2d54eecfde82fbf4da9177c8fdc52ff"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.438004 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" event={"ID":"70c953e5-75c5-4a06-aacd-913ddfbea45f","Type":"ContainerStarted","Data":"92df00df657df651153f24929db56a81be10a185bd314098365d5afa68c26576"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.438741 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.439805 4642 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7rghx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.439847 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" podUID="70c953e5-75c5-4a06-aacd-913ddfbea45f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.454503 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" event={"ID":"e24a32c1-9c23-41a2-b3a2-58ba818a0534","Type":"ContainerStarted","Data":"b11a7ebcbb985ac57d17a31b4823d0335e53c6eccddeaf62f95887ab40a7e6cf"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.454560 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" event={"ID":"e24a32c1-9c23-41a2-b3a2-58ba818a0534","Type":"ContainerStarted","Data":"bf2747b54f731fac75ee71701ea70b9e2d49a962a87adaee96a69b30a28d5031"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.455796 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" event={"ID":"78e696e7-07de-444b-8c5c-1bc69534b3b8","Type":"ContainerStarted","Data":"86e6c74dac1cbd67dfe6267a993e2aaf4b3375f17a266b1a5e6a3127567197a8"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.455820 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" event={"ID":"78e696e7-07de-444b-8c5c-1bc69534b3b8","Type":"ContainerStarted","Data":"1a70c46493a2d40764869ec1fb0416606f76f7a240cc502c269d2c08dd76ac59"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.471366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" event={"ID":"3d07c074-7835-41f1-b39a-b159f851e000","Type":"ContainerStarted","Data":"8249fd3c451c9cab4c33393df39b462bf6cd7f99c2ab08330680e65351a087de"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.477094 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" podStartSLOduration=126.477081417 podStartE2EDuration="2m6.477081417s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.476342467 +0000 UTC m=+149.708431276" watchObservedRunningTime="2026-01-28 06:50:26.477081417 +0000 UTC m=+149.709170226"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.482847 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" event={"ID":"bfaa1a5c-0e92-4979-af54-075acbc06ea7","Type":"ContainerStarted","Data":"4b1d0da1b37720c265f40e58986714ad366faa6cc596a09aa243fb0cac7a22c8"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.483518 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.485580 4642 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4jjwv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.485637 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" podUID="bfaa1a5c-0e92-4979-af54-075acbc06ea7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.490159 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 06:50:26 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld
Jan 28 06:50:26 crc kubenswrapper[4642]: [+]process-running ok
Jan 28 06:50:26 crc kubenswrapper[4642]: healthz check failed
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.490220 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.493755 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.494858 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:26.994842 +0000 UTC m=+150.226930808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.504322 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" event={"ID":"86ebb583-2f1b-4a28-b1b8-81123c2484d7","Type":"ContainerStarted","Data":"8761f71f0276973d671b7a6c9cc41a1ee9f09e0f66994c4bca8a341f657a206b"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.504366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" event={"ID":"86ebb583-2f1b-4a28-b1b8-81123c2484d7","Type":"ContainerStarted","Data":"9327a0dcf40b64b11f11f021061f22934f2e9cdda0d850f115894ad60a548c33"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.510162 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" event={"ID":"ab318e2b-3fdb-4d5d-87df-5242468f1932","Type":"ContainerStarted","Data":"487d8580a7d5a6dd552e80d5fc8aea433a8e41f335c47fb0470f65d84bf3557f"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.510246 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" event={"ID":"ab318e2b-3fdb-4d5d-87df-5242468f1932","Type":"ContainerStarted","Data":"20c22daa8911616ef26adaf045e8ffe116fdb23571e765b1d00e09164e26ef76"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.522176 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qzf9d" podStartSLOduration=126.522166223 podStartE2EDuration="2m6.522166223s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.515617052 +0000 UTC m=+149.747705861" watchObservedRunningTime="2026-01-28 06:50:26.522166223 +0000 UTC m=+149.754255032"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.523243 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" event={"ID":"10725107-b912-41e8-b667-5d42e2fa5cd8","Type":"ContainerStarted","Data":"4ba5fb5aab94764277424de0c62b8177dd3596f3d5c84a81b5ab9360bc639fa2"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.543532 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" event={"ID":"0e3520a4-607d-48a0-ad47-32d5fe2efc54","Type":"ContainerStarted","Data":"23b0ee994ce87247d0f9786fe1e0b678ff76ee02f96ea480fb679c7cdc83d6bb"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.543566 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" event={"ID":"0e3520a4-607d-48a0-ad47-32d5fe2efc54","Type":"ContainerStarted","Data":"b0df82e4664bcb96d243cb28295f93b89c5c34d7b92f4eef9bb85f1c8d68aedf"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.563802 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" event={"ID":"9a63bb09-02c3-415a-b7b5-577d3136029a","Type":"ContainerStarted","Data":"8579f9c4c569f7a511f07bb52521d7eaeb98792e2509675fb35c0577b06f4a04"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.564082 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" event={"ID":"9a63bb09-02c3-415a-b7b5-577d3136029a","Type":"ContainerStarted","Data":"20ebbd420efd71e6e781b2d938804105fe9c417ee356908b0b33b30a9bb50cca"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.578401 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" event={"ID":"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31","Type":"ContainerStarted","Data":"7ab9978dca1b3ca9a2a46298e500dd9869ce3ba09717aa5d4be68d7180bcdbd6"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.578442 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" event={"ID":"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31","Type":"ContainerStarted","Data":"c62e16264c1fe9c0ff9dbd00edd3513d49d1b090bb623a057c7c9e44bf876767"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.579022 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk"
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.589724 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" event={"ID":"399e3aa1-f241-4695-ba69-c8e2900b7bb2","Type":"ContainerStarted","Data":"987fc079ea4531fad2efdd8ef3b4e45c8ab37d74110c8ba8cc6575992509205a"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.589752 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" event={"ID":"399e3aa1-f241-4695-ba69-c8e2900b7bb2","Type":"ContainerStarted","Data":"35009ff482982824118db8bbe2f5d2e12e2fc92b03a0e273f14376bdaefcc21d"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.591070 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" event={"ID":"28e7930c-b0c7-4ef7-975d-fe130a30089c","Type":"ContainerStarted","Data":"b67ed5fcb8defb1a759079dcf8f3447b41ffc9dea858749ca57bca12f573cac5"}
Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.596935 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.598566 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.098548055 +0000 UTC m=+150.330636864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.602380 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" podStartSLOduration=126.602363041 podStartE2EDuration="2m6.602363041s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.583521535 +0000 UTC m=+149.815610345" watchObservedRunningTime="2026-01-28 06:50:26.602363041 +0000 UTC m=+149.834451851" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.622622 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-v4lf2" event={"ID":"96101a91-8390-4743-821e-7bc7fe37f8e4","Type":"ContainerStarted","Data":"b42402e207a3ae7e6f31e77ce9c6f543141e7afe91c65f20962a9137495b5e97"} Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.622658 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-v4lf2" event={"ID":"96101a91-8390-4743-821e-7bc7fe37f8e4","Type":"ContainerStarted","Data":"21b1a42c7dd74f65f8580a1a54da013c9caf0a5c73c0c4d6c912845c5adabf4b"} Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.645507 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" event={"ID":"ae263a90-0a71-40b8-bda1-ec21b3680994","Type":"ContainerStarted","Data":"27e2d49325b0e4235d7fcc731e9e77adb3fb3f61dc012c9494b773431a54a5ca"} Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.664423 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t7v5l" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.686136 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ggxxq" podStartSLOduration=126.686122302 podStartE2EDuration="2m6.686122302s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.6405124 +0000 UTC m=+149.872601209" watchObservedRunningTime="2026-01-28 06:50:26.686122302 +0000 UTC m=+149.918211112" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.705635 4642 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.709067 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.209055636 +0000 UTC m=+150.441144445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.734955 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xvw8z" podStartSLOduration=126.734938838 podStartE2EDuration="2m6.734938838s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.734495644 +0000 UTC m=+149.966584453" watchObservedRunningTime="2026-01-28 06:50:26.734938838 +0000 UTC m=+149.967027647" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.736314 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" podStartSLOduration=126.736309346 podStartE2EDuration="2m6.736309346s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.687716632 +0000 UTC m=+149.919805441" watchObservedRunningTime="2026-01-28 06:50:26.736309346 +0000 UTC m=+149.968398155" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.790856 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" podStartSLOduration=126.79083941 podStartE2EDuration="2m6.79083941s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.789552028 +0000 UTC m=+150.021640837" watchObservedRunningTime="2026-01-28 06:50:26.79083941 +0000 UTC m=+150.022928219" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.810976 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.811335 4642 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.311323616 +0000 UTC m=+150.543412424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.821553 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nl4rt" podStartSLOduration=126.821535737 podStartE2EDuration="2m6.821535737s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.820237705 +0000 UTC m=+150.052326514" watchObservedRunningTime="2026-01-28 06:50:26.821535737 +0000 UTC m=+150.053624546" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.846368 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" podStartSLOduration=126.846353023 podStartE2EDuration="2m6.846353023s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.844234699 +0000 UTC m=+150.076323508" watchObservedRunningTime="2026-01-28 06:50:26.846353023 +0000 UTC m=+150.078441832" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.861726 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 06:45:25 +0000 UTC, rotation deadline is 2026-11-20 08:25:21.084486105 +0000 UTC Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.861798 4642 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7105h34m54.22269042s for next certificate rotation Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.894319 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-v4lf2" podStartSLOduration=6.894303219 podStartE2EDuration="6.894303219s" podCreationTimestamp="2026-01-28 06:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.891819207 +0000 UTC m=+150.123908015" watchObservedRunningTime="2026-01-28 06:50:26.894303219 +0000 UTC m=+150.126392029" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.894559 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cc8qm" podStartSLOduration=126.894554473 podStartE2EDuration="2m6.894554473s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.873713676 +0000 UTC m=+150.105802486" watchObservedRunningTime="2026-01-28 06:50:26.894554473 +0000 UTC m=+150.126643281" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.912029 
4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.912322 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.412306028 +0000 UTC m=+150.644394836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.912588 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:26 crc kubenswrapper[4642]: E0128 06:50:26.912877 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.412867633 +0000 UTC m=+150.644956442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.941306 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgssm"] Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.942152 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.944477 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.968622 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" podStartSLOduration=126.968601302 podStartE2EDuration="2m6.968601302s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.966300302 +0000 UTC m=+150.198389112" watchObservedRunningTime="2026-01-28 06:50:26.968601302 +0000 UTC m=+150.200690111" Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.968815 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgssm"] Jan 28 06:50:26 crc kubenswrapper[4642]: I0128 06:50:26.998988 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8w4vg" podStartSLOduration=126.998972797 podStartE2EDuration="2m6.998972797s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:26.996972555 +0000 UTC m=+150.229061364" watchObservedRunningTime="2026-01-28 06:50:26.998972797 +0000 UTC m=+150.231061607" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.013547 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.013697 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.513680707 +0000 UTC m=+150.745769516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.013817 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.013875 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.013975 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7d6q\" (UniqueName: \"kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.014001 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.014032 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.514026227 +0000 UTC m=+150.746115036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.029810 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" podStartSLOduration=127.029794251 podStartE2EDuration="2m7.029794251s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:27.027889166 +0000 UTC m=+150.259977975" watchObservedRunningTime="2026-01-28 06:50:27.029794251 +0000 UTC m=+150.261883059" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.114738 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.114874 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.614837246 +0000 UTC m=+150.846926055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.114946 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.114980 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.115043 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7d6q\" (UniqueName: \"kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.115074 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.115472 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.115703 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.615693275 +0000 UTC m=+150.847782085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.116412 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.134796 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.135771 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.142579 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.156303 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7d6q\" (UniqueName: \"kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q\") pod \"community-operators-qgssm\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.177738 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.216507 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.216643 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.716622618 +0000 UTC m=+150.948711418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.216676 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5nw\" (UniqueName: \"kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.216758 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.216870 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.216900 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.217137 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.717129963 +0000 UTC m=+150.949218772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.255390 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.317350 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.317575 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.317611 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5nw\" (UniqueName: \"kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.317656 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.318064 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.318136 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.818122052 +0000 UTC m=+151.050210861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.318355 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.334798 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.335587 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.362424 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.379370 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5nw\" (UniqueName: \"kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw\") pod \"certified-operators-bsvlt\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.420519 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.420867 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqc4s\" (UniqueName: \"kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.420895 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.420916 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.432414 4642 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:27.93238128 +0000 UTC m=+151.164470089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.446507 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.482127 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:27 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:27 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:27 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.482177 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.524839 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.525113 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqc4s\" (UniqueName: \"kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.525155 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.525210 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.525642 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.525725 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.025711417 +0000 UTC m=+151.257800226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.526171 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.539062 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.540130 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.545587 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.565999 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqc4s\" (UniqueName: \"kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s\") pod \"community-operators-7jkmz\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.627951 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.628027 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.628045 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content\") pod \"certified-operators-c56jc\" 
(UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.628074 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8lx\" (UniqueName: \"kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.628373 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.12835805 +0000 UTC m=+151.360446858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.690036 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9fbj" event={"ID":"3d07c074-7835-41f1-b39a-b159f851e000","Type":"ContainerStarted","Data":"2573d0150c44a07dba6c398427345241656cf8414ccd74427c055fde294f41d5"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.695925 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" event={"ID":"e24a32c1-9c23-41a2-b3a2-58ba818a0534","Type":"ContainerStarted","Data":"424160100e41470bef9e6cefb29e57184d0857ddd634640af5084bac20c03993"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.705438 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.712649 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.714686 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.728274 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" event={"ID":"51a36588-71bc-409b-b36b-2f76917e5c40","Type":"ContainerStarted","Data":"67e0371092e73fa05281836d6a002006fd99900a0666f2cf910eb6a07bf48dcc"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.728809 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.728940 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.228914208 +0000 UTC m=+151.461003017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.729082 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.729149 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.729172 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.729238 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8lx\" (UniqueName: \"kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx\") pod 
\"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.731072 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.731142 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.731436 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.231427377 +0000 UTC m=+151.463516186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.749715 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" event={"ID":"fa7dcf61-b17b-42d7-8b79-e4bd9af04a31","Type":"ContainerStarted","Data":"a9c846e8b5da119e1cb467a89770185637aeb86a0e9f90575bdd5d2c9d33b440"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.775429 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" event={"ID":"6364f994-c14f-4cb9-ab55-ad13ad0973f1","Type":"ContainerStarted","Data":"0f9a2346cd67269c2e95f12f4d71af032c7f90511d67966601de4951f1daf70f"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.790634 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m6fnq" event={"ID":"399e3aa1-f241-4695-ba69-c8e2900b7bb2","Type":"ContainerStarted","Data":"fc7f9fc6699b17904f9c900071c0f19a69df1f82a385ebb76bd7fdf81126921b"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.802358 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8lx\" (UniqueName: \"kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx\") pod \"certified-operators-c56jc\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.813374 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" event={"ID":"1711b0d7-ac23-40bd-b522-46a7148d7a6f","Type":"ContainerStarted","Data":"d197f23d14a5de27b0e29fca38dd155517363246cdd43aa6949fede74e42fca0"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 
06:50:27.813416 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" event={"ID":"1711b0d7-ac23-40bd-b522-46a7148d7a6f","Type":"ContainerStarted","Data":"8c989ae6d2c51d30a5aa7769b8f309701689ab20c544e95d85ee15328577f08e"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.818923 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h76tc" event={"ID":"86ebb583-2f1b-4a28-b1b8-81123c2484d7","Type":"ContainerStarted","Data":"7bb7610d5a9259082601472e5e956355f9466d00818874fd4e6094a5e1458e3b"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.832030 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.833415 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.333394531 +0000 UTC m=+151.565483339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.834438 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-b8v8n" podStartSLOduration=127.834421072 podStartE2EDuration="2m7.834421072s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:27.775101497 +0000 UTC m=+151.007190307" watchObservedRunningTime="2026-01-28 06:50:27.834421072 +0000 UTC m=+151.066509882" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.835151 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" podStartSLOduration=127.835146086 podStartE2EDuration="2m7.835146086s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:27.833272262 +0000 UTC m=+151.065361070" watchObservedRunningTime="2026-01-28 06:50:27.835146086 +0000 UTC m=+151.067234895" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.852290 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mmdr5" event={"ID":"a84eeb48-6e52-4989-b622-f1031dad991a","Type":"ContainerStarted","Data":"1f58a19661c77cf179f91a0d066b826921df052f58dc59697aa49af120d623d8"} Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.852334 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:27 crc 
kubenswrapper[4642]: I0128 06:50:27.854550 4642 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-965rk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.854586 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.875012 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4jjwv" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.877387 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glqwl" podStartSLOduration=127.877372024 podStartE2EDuration="2m7.877372024s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:27.875304654 +0000 UTC m=+151.107393462" watchObservedRunningTime="2026-01-28 06:50:27.877372024 +0000 UTC m=+151.109460832" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.889045 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.926493 4642 patch_prober.go:28] interesting pod/apiserver-76f77b778f-h6qvh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]log ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]etcd ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/max-in-flight-filter ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 06:50:27 crc kubenswrapper[4642]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 06:50:27 crc kubenswrapper[4642]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-startinformers ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 06:50:27 crc kubenswrapper[4642]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 06:50:27 crc kubenswrapper[4642]: livez check failed Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.926554 4642 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" podUID="ae263a90-0a71-40b8-bda1-ec21b3680994" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.933575 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:27 crc kubenswrapper[4642]: E0128 06:50:27.935905 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.435891401 +0000 UTC m=+151.667980211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:27 crc kubenswrapper[4642]: I0128 06:50:27.964096 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mmdr5" podStartSLOduration=7.964077536 podStartE2EDuration="7.964077536s" podCreationTimestamp="2026-01-28 06:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:27.962962047 +0000 UTC m=+151.195050857" watchObservedRunningTime="2026-01-28 06:50:27.964077536 +0000 UTC m=+151.196166345" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.035885 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:28 crc kubenswrapper[4642]: E0128 06:50:28.036438 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.536421011 +0000 UTC m=+151.768509821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.076714 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.076766 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.098560 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:28 crc kubenswrapper[4642]: E0128 06:50:28.115493 4642 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd980a7da_b442_446b_8bde_56e17d70b28b.slice/crio-conmon-83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd980a7da_b442_446b_8bde_56e17d70b28b.slice/crio-83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479.scope\": RecentStats: unable to find data in memory cache]" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.136841 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:28 crc kubenswrapper[4642]: E0128 06:50:28.137153 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 06:50:28.637140188 +0000 UTC m=+151.869228997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fg5mw" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.150902 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.195598 4642 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.232272 4642 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T06:50:28.195620403Z","Handler":null,"Name":""} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.235418 4642 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.235486 4642 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.237301 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgssm"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.238015 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.243048 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.306285 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7rghx" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.339804 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.344397 4642 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.344425 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.439231 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fg5mw\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.466134 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-tqfl8" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.490269 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:28 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:28 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:28 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.490310 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.517065 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.536622 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.554925 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:28 crc kubenswrapper[4642]: W0128 06:50:28.560373 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc531134_3ad4_4937_ab7f_8c3fac78dae6.slice/crio-ab040461605dcb5470d9a9c03baab6590a835383f2821842fa36345a12dca1c9 WatchSource:0}: Error finding container ab040461605dcb5470d9a9c03baab6590a835383f2821842fa36345a12dca1c9: Status 404 returned error can't find the container with id ab040461605dcb5470d9a9c03baab6590a835383f2821842fa36345a12dca1c9 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.817653 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.853711 4642 generic.go:334] "Generic (PLEG): container finished" podID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerID="b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e" exitCode=0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.853771 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerDied","Data":"b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.853840 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerStarted","Data":"720d07046df446f01a72620679f03ec7a29fc522affefdc6e3d506873f5cff81"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.855309 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.856468 4642 generic.go:334] "Generic (PLEG): container finished" podID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerID="1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8" exitCode=0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.856489 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerDied","Data":"1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.856514 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerStarted","Data":"ab040461605dcb5470d9a9c03baab6590a835383f2821842fa36345a12dca1c9"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.858104 4642 generic.go:334] "Generic (PLEG): container finished" podID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerID="c70efe38afa75888ebb53c0bbed0c932d5495056f6a07ba922210376c5988b2d" exitCode=0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.858176 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerDied","Data":"c70efe38afa75888ebb53c0bbed0c932d5495056f6a07ba922210376c5988b2d"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.858223 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" 
event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerStarted","Data":"60d7f53884b14ea88aa498ba7b371ed3f3458e675dcda661eb40969f5c9eda64"} Jan 28 06:50:28 crc kubenswrapper[4642]: W0128 06:50:28.859620 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f06bd76_391b_4d80_ba76_a992ee54241a.slice/crio-6b7d1d00b651d16f996e43a6d5ba0a163eda64334f5cd8018ddf39ee6ea513f5 WatchSource:0}: Error finding container 6b7d1d00b651d16f996e43a6d5ba0a163eda64334f5cd8018ddf39ee6ea513f5: Status 404 returned error can't find the container with id 6b7d1d00b651d16f996e43a6d5ba0a163eda64334f5cd8018ddf39ee6ea513f5 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.860318 4642 generic.go:334] "Generic (PLEG): container finished" podID="d980a7da-b442-446b-8bde-56e17d70b28b" containerID="83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479" exitCode=0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.860358 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" event={"ID":"d980a7da-b442-446b-8bde-56e17d70b28b","Type":"ContainerDied","Data":"83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.861690 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerID="29bced9bb267e10a1e7d93cad436adf7afecc2dd1e8f2a23e112d32129a42314" exitCode=0 Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.861755 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerDied","Data":"29bced9bb267e10a1e7d93cad436adf7afecc2dd1e8f2a23e112d32129a42314"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.861783 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerStarted","Data":"5cfb88e7738a7228f57500aad6298643ba0107a3ea1666b810b2a530af005711"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.865967 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" event={"ID":"1711b0d7-ac23-40bd-b522-46a7148d7a6f","Type":"ContainerStarted","Data":"180a67581ee1ad5c1121aa15766d5fb229aa5f2c1faf0d526b7b5209d195c9d8"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.865996 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" event={"ID":"1711b0d7-ac23-40bd-b522-46a7148d7a6f","Type":"ContainerStarted","Data":"95c2ab9846575e2a9c77495e13658985e1ab219050b6560a206d44ef71cc7182"} Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.872630 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jwpbc" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.903480 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-t2ncd" podStartSLOduration=8.903435459 podStartE2EDuration="8.903435459s" podCreationTimestamp="2026-01-28 06:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:28.901681008 +0000 UTC m=+152.133769817" 
watchObservedRunningTime="2026-01-28 06:50:28.903435459 +0000 UTC m=+152.135524258" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.919538 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.930470 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.932637 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.935117 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"] Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.954608 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.954657 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:28 crc kubenswrapper[4642]: I0128 06:50:28.954674 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5td7\" (UniqueName: \"kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.055623 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.055666 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.055686 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5td7\" (UniqueName: \"kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.056421 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content\") pod \"redhat-marketplace-sv75h\" (UID: 
\"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.056684 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.078148 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5td7\" (UniqueName: \"kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7\") pod \"redhat-marketplace-sv75h\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.104440 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.258899 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.319271 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.320313 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.327704 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.360599 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.360671 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.360711 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjjt\" (UniqueName: \"kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.445474 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"] Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.462657 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities\") pod \"redhat-marketplace-sdvcx\" (UID: 
\"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.462707 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.462749 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjjt\" (UniqueName: \"kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.463623 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.463689 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.479720 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsjjt\" (UniqueName: \"kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt\") pod \"redhat-marketplace-sdvcx\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.484142 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:29 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:29 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:29 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.484211 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.642411 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.878329 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" event={"ID":"1f06bd76-391b-4d80-ba76-a992ee54241a","Type":"ContainerStarted","Data":"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6"} Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.878373 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" event={"ID":"1f06bd76-391b-4d80-ba76-a992ee54241a","Type":"ContainerStarted","Data":"6b7d1d00b651d16f996e43a6d5ba0a163eda64334f5cd8018ddf39ee6ea513f5"} Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.878413 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.882979 4642 generic.go:334] "Generic (PLEG): container finished" podID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerID="63a90fe72aae5ef74166010e6d29732114bac7891331dfae1b42946811dcd881" exitCode=0 Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.883130 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerDied","Data":"63a90fe72aae5ef74166010e6d29732114bac7891331dfae1b42946811dcd881"} Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.883178 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerStarted","Data":"ae653b857e42a05b66d316809aa5f800a426c71fd39b64772a36af740e3df473"} Jan 28 06:50:29 crc kubenswrapper[4642]: I0128 06:50:29.895497 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" podStartSLOduration=129.895485709 podStartE2EDuration="2m9.895485709s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:29.892218543 +0000 UTC m=+153.124307351" watchObservedRunningTime="2026-01-28 06:50:29.895485709 +0000 UTC m=+153.127574518" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.050111 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:50:30 crc kubenswrapper[4642]: W0128 06:50:30.077824 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399d0f01_0fd0_407e_a63d_e7c900f24452.slice/crio-b7be0a6d26808c75c54f36d7fedc1918cccc881bf4e1b0a0a3d8c768b14e2747 WatchSource:0}: Error finding container b7be0a6d26808c75c54f36d7fedc1918cccc881bf4e1b0a0a3d8c768b14e2747: Status 404 returned error can't find the container with id b7be0a6d26808c75c54f36d7fedc1918cccc881bf4e1b0a0a3d8c768b14e2747 Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.090204 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.124148 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"] Jan 28 06:50:30 crc kubenswrapper[4642]: E0128 06:50:30.124773 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d980a7da-b442-446b-8bde-56e17d70b28b" containerName="collect-profiles" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.129453 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="d980a7da-b442-446b-8bde-56e17d70b28b" containerName="collect-profiles" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.129652 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="d980a7da-b442-446b-8bde-56e17d70b28b" containerName="collect-profiles" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.130329 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"] Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.130481 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.132213 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284521 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume\") pod \"d980a7da-b442-446b-8bde-56e17d70b28b\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284565 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbv4c\" (UniqueName: \"kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c\") pod \"d980a7da-b442-446b-8bde-56e17d70b28b\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284663 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume\") pod \"d980a7da-b442-446b-8bde-56e17d70b28b\" (UID: \"d980a7da-b442-446b-8bde-56e17d70b28b\") " Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284805 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvrv\" (UniqueName: \"kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284870 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.284960 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.286427 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume" (OuterVolumeSpecName: "config-volume") pod "d980a7da-b442-446b-8bde-56e17d70b28b" (UID: "d980a7da-b442-446b-8bde-56e17d70b28b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.291411 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d980a7da-b442-446b-8bde-56e17d70b28b" (UID: "d980a7da-b442-446b-8bde-56e17d70b28b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.291737 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c" (OuterVolumeSpecName: "kube-api-access-sbv4c") pod "d980a7da-b442-446b-8bde-56e17d70b28b" (UID: "d980a7da-b442-446b-8bde-56e17d70b28b"). InnerVolumeSpecName "kube-api-access-sbv4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.387533 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.387756 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.387789 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvrv\" (UniqueName: \"kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.388035 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.388418 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d980a7da-b442-446b-8bde-56e17d70b28b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.388428 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.388443 4642 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d980a7da-b442-446b-8bde-56e17d70b28b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.388547 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbv4c\" (UniqueName: \"kubernetes.io/projected/d980a7da-b442-446b-8bde-56e17d70b28b-kube-api-access-sbv4c\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.404752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvrv\" (UniqueName: \"kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv\") pod \"redhat-operators-ps9zm\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.443856 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.480582 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:30 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:30 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:30 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.480646 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.521657 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.522986 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.531010 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.693016 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"] Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.693883 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.693932 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.693977 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xvp\" (UniqueName: \"kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: W0128 06:50:30.701853 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1e7744_f44d_4430_915b_59821d507da1.slice/crio-4fdfeeb6a5f3b120698f1ff3bfc8ed152db78426a1bf0d277cc1f4ccfb16ac14 WatchSource:0}: Error finding container 4fdfeeb6a5f3b120698f1ff3bfc8ed152db78426a1bf0d277cc1f4ccfb16ac14: Status 404 returned error can't find the container with id 4fdfeeb6a5f3b120698f1ff3bfc8ed152db78426a1bf0d277cc1f4ccfb16ac14 Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.795808 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.795880 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.795933 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xvp\" (UniqueName: \"kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.796310 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.796495 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.811794 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xvp\" (UniqueName: \"kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp\") pod \"redhat-operators-s2kvw\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.847777 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.895034 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerStarted","Data":"4fdfeeb6a5f3b120698f1ff3bfc8ed152db78426a1bf0d277cc1f4ccfb16ac14"} Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.898052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" event={"ID":"d980a7da-b442-446b-8bde-56e17d70b28b","Type":"ContainerDied","Data":"39cf3954429937a5ef0ae5a75531fbc7ced8e3b95b28b33d8847ccc7c3408f80"} Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.898084 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39cf3954429937a5ef0ae5a75531fbc7ced8e3b95b28b33d8847ccc7c3408f80" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.898154 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5" Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.902945 4642 generic.go:334] "Generic (PLEG): container finished" podID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerID="3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937" exitCode=0 Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.903021 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerDied","Data":"3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937"} Jan 28 06:50:30 crc kubenswrapper[4642]: I0128 06:50:30.903050 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerStarted","Data":"b7be0a6d26808c75c54f36d7fedc1918cccc881bf4e1b0a0a3d8c768b14e2747"} Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.240223 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.480929 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:31 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:31 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:31 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.480984 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.914467 4642 generic.go:334] "Generic (PLEG): container finished" podID="8f1e7744-f44d-4430-915b-59821d507da1" containerID="5cd4336221342b67043af1e26cfbc7e85f41145bb6fd5c2bde811f721c94e123" exitCode=0 Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.914754 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerDied","Data":"5cd4336221342b67043af1e26cfbc7e85f41145bb6fd5c2bde811f721c94e123"} Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.918062 4642 generic.go:334] "Generic (PLEG): container finished" podID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerID="d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c" exitCode=0 Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.918098 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerDied","Data":"d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c"} Jan 28 06:50:31 crc kubenswrapper[4642]: I0128 06:50:31.918117 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerStarted","Data":"9783e5639a22eaba150e2b4718d26c797cd36998386385e566ca8e44366a5523"} Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.479299 4642 
Jan 28 06:50:32 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld
Jan 28 06:50:32 crc kubenswrapper[4642]: [+]process-running ok
Jan 28 06:50:32 crc kubenswrapper[4642]: healthz check failed
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.479355 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.504343 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k2t68"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.528097 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.528775 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.531654 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.531930 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.534233 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.623949 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.624128 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.716351 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.725925 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.725985 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
\"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.727123 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.727159 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-h6qvh" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.748260 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.757551 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.758217 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.765545 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.767226 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.781796 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.812069 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.812115 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.818729 4642 patch_prober.go:28] interesting pod/console-f9d7485db-zrrr7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.818783 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zrrr7" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.846621 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.931228 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:32 crc kubenswrapper[4642]: I0128 06:50:32.931375 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.033085 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.033176 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.033286 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.054246 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.095977 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.478051 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.481102 4642 patch_prober.go:28] interesting pod/router-default-5444994796-9dsht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 06:50:33 crc kubenswrapper[4642]: [-]has-synced failed: reason withheld Jan 28 06:50:33 crc kubenswrapper[4642]: [+]process-running ok Jan 28 06:50:33 crc kubenswrapper[4642]: healthz check failed Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.481165 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9dsht" podUID="7b7c9f44-73e2-4109-827b-bed3a722a78c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 06:50:33 crc kubenswrapper[4642]: I0128 06:50:33.504827 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:50:34 crc kubenswrapper[4642]: I0128 06:50:34.485945 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:34 crc kubenswrapper[4642]: I0128 06:50:34.489148 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9dsht" Jan 28 06:50:35 crc kubenswrapper[4642]: I0128 06:50:35.370666 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mmdr5" Jan 28 06:50:37 crc kubenswrapper[4642]: I0128 06:50:37.154258 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 06:50:37 crc kubenswrapper[4642]: W0128 06:50:37.159690 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd9429ecc_022e_4e10_a174_f54648494e0f.slice/crio-478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0 WatchSource:0}: Error finding container 478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0: Status 404 returned error can't find the container with id 478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0 Jan 28 06:50:37 crc kubenswrapper[4642]: I0128 06:50:37.172239 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 06:50:37 crc kubenswrapper[4642]: W0128 06:50:37.179160 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod45cb677e_e454_4836_935f_1940dbf660f1.slice/crio-a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521 WatchSource:0}: Error finding container a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521: Status 404 returned error can't find the container with id a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521 Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.011828 4642 generic.go:334] "Generic (PLEG): container finished" podID="d9429ecc-022e-4e10-a174-f54648494e0f" containerID="88f96d121828357c540272e572ec752a9246464bea4878a7d51efcc861d7a933" exitCode=0 Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.011915 4642 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d9429ecc-022e-4e10-a174-f54648494e0f","Type":"ContainerDied","Data":"88f96d121828357c540272e572ec752a9246464bea4878a7d51efcc861d7a933"} Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.012164 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d9429ecc-022e-4e10-a174-f54648494e0f","Type":"ContainerStarted","Data":"478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0"} Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.017112 4642 generic.go:334] "Generic (PLEG): container finished" podID="45cb677e-e454-4836-935f-1940dbf660f1" containerID="e472b128996b0f877c9fe988e3a594a9bdaaa8f8d848fa1c16689db6b04c0b3e" exitCode=0 Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.017138 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45cb677e-e454-4836-935f-1940dbf660f1","Type":"ContainerDied","Data":"e472b128996b0f877c9fe988e3a594a9bdaaa8f8d848fa1c16689db6b04c0b3e"} Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.017154 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45cb677e-e454-4836-935f-1940dbf660f1","Type":"ContainerStarted","Data":"a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521"} Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.199535 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:50:38 crc kubenswrapper[4642]: I0128 06:50:38.199586 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.285371 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.342104 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.433780 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access\") pod \"d9429ecc-022e-4e10-a174-f54648494e0f\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.433852 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir\") pod \"45cb677e-e454-4836-935f-1940dbf660f1\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.433871 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir\") pod \"d9429ecc-022e-4e10-a174-f54648494e0f\" (UID: \"d9429ecc-022e-4e10-a174-f54648494e0f\") " Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.433961 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") pod \"45cb677e-e454-4836-935f-1940dbf660f1\" (UID: \"45cb677e-e454-4836-935f-1940dbf660f1\") " Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.434211 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45cb677e-e454-4836-935f-1940dbf660f1" (UID: "45cb677e-e454-4836-935f-1940dbf660f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.434246 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d9429ecc-022e-4e10-a174-f54648494e0f" (UID: "d9429ecc-022e-4e10-a174-f54648494e0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.434325 4642 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45cb677e-e454-4836-935f-1940dbf660f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.439060 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d9429ecc-022e-4e10-a174-f54648494e0f" (UID: "d9429ecc-022e-4e10-a174-f54648494e0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.439298 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45cb677e-e454-4836-935f-1940dbf660f1" (UID: "45cb677e-e454-4836-935f-1940dbf660f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.534839 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9429ecc-022e-4e10-a174-f54648494e0f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.534877 4642 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9429ecc-022e-4e10-a174-f54648494e0f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:39 crc kubenswrapper[4642]: I0128 06:50:39.534887 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45cb677e-e454-4836-935f-1940dbf660f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.027534 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45cb677e-e454-4836-935f-1940dbf660f1","Type":"ContainerDied","Data":"a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521"} Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.027850 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a359cf8a6877b2bcec13eaa8a1f24288323407dfe2acbe2762f3cd95d90c4521" Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.027551 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.029383 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d9429ecc-022e-4e10-a174-f54648494e0f","Type":"ContainerDied","Data":"478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0"} Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.029417 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 06:50:40 crc kubenswrapper[4642]: I0128 06:50:40.029423 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478e4c6f2a97244054ac690d279facd2973b494f13f23160505c780523d4cdb0" Jan 28 06:50:42 crc kubenswrapper[4642]: I0128 06:50:42.570733 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:42 crc kubenswrapper[4642]: I0128 06:50:42.576599 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7ad39da-99cf-4851-be79-a7d38df54055-metrics-certs\") pod \"network-metrics-daemon-bpz6r\" (UID: \"e7ad39da-99cf-4851-be79-a7d38df54055\") " pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:42 crc kubenswrapper[4642]: I0128 06:50:42.607480 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bpz6r" Jan 28 06:50:42 crc kubenswrapper[4642]: I0128 06:50:42.813381 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:42 crc kubenswrapper[4642]: I0128 06:50:42.816542 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:50:44 crc kubenswrapper[4642]: I0128 06:50:44.815346 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:44 crc kubenswrapper[4642]: I0128 06:50:44.815615 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" podUID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" containerName="controller-manager" containerID="cri-o://949d30e971a64837560d12686c36196753107cfc7b0b3ca8660db110af38f233" gracePeriod=30 Jan 28 06:50:44 crc kubenswrapper[4642]: I0128 06:50:44.817511 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:44 crc kubenswrapper[4642]: I0128 06:50:44.817672 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" podUID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" containerName="route-controller-manager" containerID="cri-o://0539d90d377dce9a8ffd4fd92e903761054a18c41d6001ab10ccde7944c39791" gracePeriod=30 Jan 28 06:50:45 crc kubenswrapper[4642]: I0128 06:50:45.055573 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" containerID="0539d90d377dce9a8ffd4fd92e903761054a18c41d6001ab10ccde7944c39791" exitCode=0 Jan 28 06:50:45 crc kubenswrapper[4642]: I0128 06:50:45.055619 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" event={"ID":"a4310bc4-d7e7-4f8b-832b-57bceda71a45","Type":"ContainerDied","Data":"0539d90d377dce9a8ffd4fd92e903761054a18c41d6001ab10ccde7944c39791"} Jan 28 06:50:45 crc kubenswrapper[4642]: I0128 06:50:45.057572 4642 generic.go:334] "Generic (PLEG): container finished" podID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" containerID="949d30e971a64837560d12686c36196753107cfc7b0b3ca8660db110af38f233" exitCode=0 Jan 28 06:50:45 crc kubenswrapper[4642]: I0128 06:50:45.057656 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" event={"ID":"3ed61f23-b0b6-4a75-9f9c-44a992a64d23","Type":"ContainerDied","Data":"949d30e971a64837560d12686c36196753107cfc7b0b3ca8660db110af38f233"} Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.300519 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.310748 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331456 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"] Jan 28 06:50:48 crc kubenswrapper[4642]: E0128 06:50:48.331696 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" containerName="controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331715 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" containerName="controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: E0128 06:50:48.331726 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45cb677e-e454-4836-935f-1940dbf660f1" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331732 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cb677e-e454-4836-935f-1940dbf660f1" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: E0128 06:50:48.331743 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9429ecc-022e-4e10-a174-f54648494e0f" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331748 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9429ecc-022e-4e10-a174-f54648494e0f" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: E0128 06:50:48.331756 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" containerName="route-controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331762 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" containerName="route-controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331851 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" containerName="controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331863 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9429ecc-022e-4e10-a174-f54648494e0f" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331870 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cb677e-e454-4836-935f-1940dbf660f1" containerName="pruner" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.331877 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" containerName="route-controller-manager" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.332271 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.345262 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"] Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439463 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert\") pod \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439523 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca\") pod \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439552 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config\") pod \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439570 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles\") pod \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439600 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config\") pod \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439621 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca\") pod \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439649 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psvvh\" (UniqueName: \"kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh\") pod \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439685 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert\") pod \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\" (UID: \"3ed61f23-b0b6-4a75-9f9c-44a992a64d23\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439714 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj69n\" (UniqueName: \"kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n\") pod \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\" (UID: \"a4310bc4-d7e7-4f8b-832b-57bceda71a45\") " Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439857 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhf4\" (UniqueName: \"kubernetes.io/projected/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-kube-api-access-qrhf4\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439945 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.439970 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.440020 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.440052 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.440409 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4310bc4-d7e7-4f8b-832b-57bceda71a45" (UID: "a4310bc4-d7e7-4f8b-832b-57bceda71a45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.440609 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3ed61f23-b0b6-4a75-9f9c-44a992a64d23" (UID: "3ed61f23-b0b6-4a75-9f9c-44a992a64d23"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.440660 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config" (OuterVolumeSpecName: "config") pod "3ed61f23-b0b6-4a75-9f9c-44a992a64d23" (UID: "3ed61f23-b0b6-4a75-9f9c-44a992a64d23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.441221 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca" (OuterVolumeSpecName: "client-ca") pod "3ed61f23-b0b6-4a75-9f9c-44a992a64d23" (UID: "3ed61f23-b0b6-4a75-9f9c-44a992a64d23"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.441392 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config" (OuterVolumeSpecName: "config") pod "a4310bc4-d7e7-4f8b-832b-57bceda71a45" (UID: "a4310bc4-d7e7-4f8b-832b-57bceda71a45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.446346 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n" (OuterVolumeSpecName: "kube-api-access-vj69n") pod "a4310bc4-d7e7-4f8b-832b-57bceda71a45" (UID: "a4310bc4-d7e7-4f8b-832b-57bceda71a45"). InnerVolumeSpecName "kube-api-access-vj69n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.446528 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4310bc4-d7e7-4f8b-832b-57bceda71a45" (UID: "a4310bc4-d7e7-4f8b-832b-57bceda71a45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.446937 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh" (OuterVolumeSpecName: "kube-api-access-psvvh") pod "3ed61f23-b0b6-4a75-9f9c-44a992a64d23" (UID: "3ed61f23-b0b6-4a75-9f9c-44a992a64d23"). InnerVolumeSpecName "kube-api-access-psvvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.447409 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3ed61f23-b0b6-4a75-9f9c-44a992a64d23" (UID: "3ed61f23-b0b6-4a75-9f9c-44a992a64d23"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.508072 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bpz6r"] Jan 28 06:50:48 crc kubenswrapper[4642]: W0128 06:50:48.538609 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ad39da_99cf_4851_be79_a7d38df54055.slice/crio-71eb387094f81717e4f2ea7e1a9db939f8256db8de6b6ebc23f43d3e65049bc4 WatchSource:0}: Error finding container 71eb387094f81717e4f2ea7e1a9db939f8256db8de6b6ebc23f43d3e65049bc4: Status 404 returned error can't find the container with id 71eb387094f81717e4f2ea7e1a9db939f8256db8de6b6ebc23f43d3e65049bc4 Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540712 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540758 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540809 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540840 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540871 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhf4\" (UniqueName: \"kubernetes.io/projected/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-kube-api-access-qrhf4\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540935 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540948 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psvvh\" (UniqueName: \"kubernetes.io/projected/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-kube-api-access-psvvh\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540958 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540968 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj69n\" (UniqueName: \"kubernetes.io/projected/a4310bc4-d7e7-4f8b-832b-57bceda71a45-kube-api-access-vj69n\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540977 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4310bc4-d7e7-4f8b-832b-57bceda71a45-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.540985 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.541420 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4310bc4-d7e7-4f8b-832b-57bceda71a45-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.542051 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.542072 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.542150 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.542172 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed61f23-b0b6-4a75-9f9c-44a992a64d23-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.542972 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.544035 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert\") pod \"controller-manager-78bc4cb977-psz6q\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.559782 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:50:48 crc kubenswrapper[4642]: 
Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.659453 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q"
Jan 28 06:50:48 crc kubenswrapper[4642]: I0128 06:50:48.815814 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"]
Jan 28 06:50:48 crc kubenswrapper[4642]: W0128 06:50:48.822763 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a08ae1_6b1f_4cf6_adb1_2d0cd9e61858.slice/crio-cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a WatchSource:0}: Error finding container cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a: Status 404 returned error can't find the container with id cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.080139 4642 generic.go:334] "Generic (PLEG): container finished" podID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerID="7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9" exitCode=0
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.080230 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerDied","Data":"7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9"}
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.082078 4642 generic.go:334] "Generic (PLEG): container finished" podID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerID="1372f6e5fc16b214b7227c199fa4ca959da0a3bc4d41cd2ccf5da306356c3add" exitCode=0
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.082136 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerDied","Data":"1372f6e5fc16b214b7227c199fa4ca959da0a3bc4d41cd2ccf5da306356c3add"}
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.083657 4642 generic.go:334] "Generic (PLEG): container finished" podID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerID="4e61c37e6054682ba02c423cf3a754a2084ab573094068b472db7b8939be74cc" exitCode=0
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.083689 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerDied","Data":"4e61c37e6054682ba02c423cf3a754a2084ab573094068b472db7b8939be74cc"}
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.084839 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" event={"ID":"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858","Type":"ContainerStarted","Data":"601b299fa9fb0c67340c35e001b17f5aa5a776b4339a2ab62947a7d372bc6acd"}
Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.084888 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" event={"ID":"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858","Type":"ContainerStarted","Data":"cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a"}
pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" event={"ID":"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858","Type":"ContainerStarted","Data":"cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.084903 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.088660 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.089588 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" event={"ID":"a4310bc4-d7e7-4f8b-832b-57bceda71a45","Type":"ContainerDied","Data":"54463251fc2cf840e27384500d94711273888e542012bf8e41db5764d78c4c91"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.089619 4642 scope.go:117] "RemoveContainer" containerID="0539d90d377dce9a8ffd4fd92e903761054a18c41d6001ab10ccde7944c39791" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.089713 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.094923 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.094921 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t9tps" event={"ID":"3ed61f23-b0b6-4a75-9f9c-44a992a64d23","Type":"ContainerDied","Data":"22326decbba305092dfaf37ab087c858c2100971c0e195dad2033196a3adf056"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.104248 4642 generic.go:334] "Generic (PLEG): container finished" podID="8f1e7744-f44d-4430-915b-59821d507da1" containerID="df9fe6ecacf0625c219021d2a1208998007b6196c960d3b1a6159e1924c49204" exitCode=0 Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.105720 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerDied","Data":"df9fe6ecacf0625c219021d2a1208998007b6196c960d3b1a6159e1924c49204"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.107004 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerID="e8932f18cf2283617e4e09e5994b3d217308d40b6950a94b2b1367fc704fde5f" exitCode=0 Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.107049 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerDied","Data":"e8932f18cf2283617e4e09e5994b3d217308d40b6950a94b2b1367fc704fde5f"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.121967 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" event={"ID":"e7ad39da-99cf-4851-be79-a7d38df54055","Type":"ContainerStarted","Data":"2564628d90229120cb8f565881d456f04525e55fe1a86c83a8de043abbae7786"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.121992 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-bpz6r" event={"ID":"e7ad39da-99cf-4851-be79-a7d38df54055","Type":"ContainerStarted","Data":"fb1b6ceb4b39031707e3395e0196fb52ffbc9b794a297af57ce0718bc1f52837"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.122004 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bpz6r" event={"ID":"e7ad39da-99cf-4851-be79-a7d38df54055","Type":"ContainerStarted","Data":"71eb387094f81717e4f2ea7e1a9db939f8256db8de6b6ebc23f43d3e65049bc4"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.126170 4642 generic.go:334] "Generic (PLEG): container finished" podID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerID="253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a" exitCode=0 Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.126261 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerDied","Data":"253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.130851 4642 generic.go:334] "Generic (PLEG): container finished" podID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerID="f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a" exitCode=0 Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.130994 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerDied","Data":"f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.145374 4642 generic.go:334] "Generic (PLEG): container finished" podID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerID="c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77" exitCode=0 Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.145435 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerDied","Data":"c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77"} Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.147590 4642 scope.go:117] "RemoveContainer" containerID="949d30e971a64837560d12686c36196753107cfc7b0b3ca8660db110af38f233" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.192591 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" podStartSLOduration=5.192574393 podStartE2EDuration="5.192574393s" podCreationTimestamp="2026-01-28 06:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:49.179678012 +0000 UTC m=+172.411766822" watchObservedRunningTime="2026-01-28 06:50:49.192574393 +0000 UTC m=+172.424663203" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.204173 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.205891 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t9tps"] Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.269425 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-bpz6r" podStartSLOduration=149.269407505 podStartE2EDuration="2m29.269407505s" podCreationTimestamp="2026-01-28 06:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:49.267747081 +0000 UTC m=+172.499835890" watchObservedRunningTime="2026-01-28 06:50:49.269407505 +0000 UTC m=+172.501496313" Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.299539 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:49 crc kubenswrapper[4642]: I0128 06:50:49.302759 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m9j5"] Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.152943 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerStarted","Data":"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8"} Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.157420 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerStarted","Data":"c4c6b60c32d2c8da76a328af6e7bdacac70b4f8ead6192dbf9e2c55ad4718e5d"} Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.159860 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerStarted","Data":"ce5ed18a12f3a09fb50ca4995f92c25febe840f58670239b767e7e60fbc44c53"} Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.167530 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2kvw" podStartSLOduration=2.081592239 podStartE2EDuration="20.167520186s" podCreationTimestamp="2026-01-28 06:50:30 +0000 UTC" firstStartedPulling="2026-01-28 06:50:31.920050691 +0000 UTC m=+155.152139500" lastFinishedPulling="2026-01-28 06:50:50.005978638 +0000 UTC m=+173.238067447" observedRunningTime="2026-01-28 06:50:50.165907953 +0000 UTC m=+173.397996782" watchObservedRunningTime="2026-01-28 06:50:50.167520186 +0000 UTC m=+173.399608995" Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.181445 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsvlt" podStartSLOduration=2.252972009 podStartE2EDuration="23.181426486s" podCreationTimestamp="2026-01-28 06:50:27 +0000 UTC" firstStartedPulling="2026-01-28 06:50:28.863333418 +0000 UTC m=+152.095422217" lastFinishedPulling="2026-01-28 06:50:49.791787885 +0000 UTC m=+173.023876694" observedRunningTime="2026-01-28 06:50:50.177812077 +0000 UTC m=+173.409900887" watchObservedRunningTime="2026-01-28 06:50:50.181426486 +0000 UTC m=+173.413515296" Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.197726 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sv75h" podStartSLOduration=2.231794284 podStartE2EDuration="22.197709739s" podCreationTimestamp="2026-01-28 06:50:28 +0000 UTC" firstStartedPulling="2026-01-28 06:50:29.88522736 +0000 UTC m=+153.117316170" lastFinishedPulling="2026-01-28 06:50:49.851142816 +0000 UTC m=+173.083231625" 
observedRunningTime="2026-01-28 06:50:50.195871431 +0000 UTC m=+173.427960240" watchObservedRunningTime="2026-01-28 06:50:50.197709739 +0000 UTC m=+173.429798548" Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.849019 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:50 crc kubenswrapper[4642]: I0128 06:50:50.849306 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.104975 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed61f23-b0b6-4a75-9f9c-44a992a64d23" path="/var/lib/kubelet/pods/3ed61f23-b0b6-4a75-9f9c-44a992a64d23/volumes" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.105550 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4310bc4-d7e7-4f8b-832b-57bceda71a45" path="/var/lib/kubelet/pods/a4310bc4-d7e7-4f8b-832b-57bceda71a45/volumes" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.168972 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerStarted","Data":"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4"} Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.172113 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerStarted","Data":"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"} Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.175281 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerStarted","Data":"9a4f2a4a4a51d678b3b607501d73aeeb68695f503d4ff64e0bba6cd3e096ebf6"} Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.178745 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerStarted","Data":"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d"} Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.182383 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerStarted","Data":"10b1cb7a9a54d17a2a80282801b2159ac4fa81cbab270e39da3158d43568609d"} Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.187839 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c56jc" podStartSLOduration=2.8431788879999997 podStartE2EDuration="24.187824468s" podCreationTimestamp="2026-01-28 06:50:27 +0000 UTC" firstStartedPulling="2026-01-28 06:50:28.855011813 +0000 UTC m=+152.087100622" lastFinishedPulling="2026-01-28 06:50:50.199657393 +0000 UTC m=+173.431746202" observedRunningTime="2026-01-28 06:50:51.185725499 +0000 UTC m=+174.417814308" watchObservedRunningTime="2026-01-28 06:50:51.187824468 +0000 UTC m=+174.419913277" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.204249 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdvcx" podStartSLOduration=2.860424019 podStartE2EDuration="22.20422982s" 
podCreationTimestamp="2026-01-28 06:50:29 +0000 UTC" firstStartedPulling="2026-01-28 06:50:30.907007498 +0000 UTC m=+154.139096308" lastFinishedPulling="2026-01-28 06:50:50.2508133 +0000 UTC m=+173.482902109" observedRunningTime="2026-01-28 06:50:51.203643366 +0000 UTC m=+174.435732175" watchObservedRunningTime="2026-01-28 06:50:51.20422982 +0000 UTC m=+174.436318629" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.220613 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgssm" podStartSLOduration=3.778065342 podStartE2EDuration="25.220584987s" podCreationTimestamp="2026-01-28 06:50:26 +0000 UTC" firstStartedPulling="2026-01-28 06:50:28.859138076 +0000 UTC m=+152.091226884" lastFinishedPulling="2026-01-28 06:50:50.30165772 +0000 UTC m=+173.533746529" observedRunningTime="2026-01-28 06:50:51.218310949 +0000 UTC m=+174.450399759" watchObservedRunningTime="2026-01-28 06:50:51.220584987 +0000 UTC m=+174.452673797" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.244601 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ps9zm" podStartSLOduration=2.932440413 podStartE2EDuration="21.244569388s" podCreationTimestamp="2026-01-28 06:50:30 +0000 UTC" firstStartedPulling="2026-01-28 06:50:31.917605461 +0000 UTC m=+155.149694270" lastFinishedPulling="2026-01-28 06:50:50.229734435 +0000 UTC m=+173.461823245" observedRunningTime="2026-01-28 06:50:51.241250013 +0000 UTC m=+174.473338823" watchObservedRunningTime="2026-01-28 06:50:51.244569388 +0000 UTC m=+174.476658198" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.258497 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jkmz" podStartSLOduration=2.930274957 podStartE2EDuration="24.258472323s" podCreationTimestamp="2026-01-28 06:50:27 +0000 UTC" firstStartedPulling="2026-01-28 06:50:28.857794808 +0000 UTC m=+152.089883618" lastFinishedPulling="2026-01-28 06:50:50.185992176 +0000 UTC m=+173.418080984" observedRunningTime="2026-01-28 06:50:51.25708836 +0000 UTC m=+174.489177168" watchObservedRunningTime="2026-01-28 06:50:51.258472323 +0000 UTC m=+174.490561131" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.320571 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.321140 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.322355 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.322558 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.322774 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.322872 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.323325 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.323587 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.334696 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.386491 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.386544 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.386571 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8gs\" (UniqueName: \"kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.386600 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.487377 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert\") pod 
\"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.487430 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.487455 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8gs\" (UniqueName: \"kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.487508 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.488414 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.488545 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.493879 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.502552 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8gs\" (UniqueName: \"kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs\") pod \"route-controller-manager-7c588587d7-r4wkq\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.635905 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.850731 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:50:51 crc kubenswrapper[4642]: W0128 06:50:51.869517 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedac8762_d5ab_42ef_b810_8de869bfc50e.slice/crio-2e6f880c3b266d58433297847cf42032ed0aea857afa7dbd43f2de6fbe765f9c WatchSource:0}: Error finding container 2e6f880c3b266d58433297847cf42032ed0aea857afa7dbd43f2de6fbe765f9c: Status 404 returned error can't find the container with id 2e6f880c3b266d58433297847cf42032ed0aea857afa7dbd43f2de6fbe765f9c Jan 28 06:50:51 crc kubenswrapper[4642]: I0128 06:50:51.944334 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2kvw" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="registry-server" probeResult="failure" output=< Jan 28 06:50:51 crc kubenswrapper[4642]: timeout: failed to connect service ":50051" within 1s Jan 28 06:50:51 crc kubenswrapper[4642]: > Jan 28 06:50:52 crc kubenswrapper[4642]: I0128 06:50:52.189083 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" event={"ID":"edac8762-d5ab-42ef-b810-8de869bfc50e","Type":"ContainerStarted","Data":"ea5be59e58a7284424ed9144675c75292b0fa6ac389284bd00f93b8ca7474c28"} Jan 28 06:50:52 crc kubenswrapper[4642]: I0128 06:50:52.189121 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" event={"ID":"edac8762-d5ab-42ef-b810-8de869bfc50e","Type":"ContainerStarted","Data":"2e6f880c3b266d58433297847cf42032ed0aea857afa7dbd43f2de6fbe765f9c"} Jan 28 06:50:52 crc kubenswrapper[4642]: I0128 06:50:52.189534 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:52 crc kubenswrapper[4642]: I0128 06:50:52.209062 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" podStartSLOduration=8.209046346 podStartE2EDuration="8.209046346s" podCreationTimestamp="2026-01-28 06:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:50:52.207876475 +0000 UTC m=+175.439965284" watchObservedRunningTime="2026-01-28 06:50:52.209046346 +0000 UTC m=+175.441135155" Jan 28 06:50:52 crc kubenswrapper[4642]: I0128 06:50:52.333323 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.255917 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.256553 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.290750 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.447201 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.447261 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.474065 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.706146 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.706209 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.735839 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.889604 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.889650 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:57 crc kubenswrapper[4642]: I0128 06:50:57.918914 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:58 crc kubenswrapper[4642]: I0128 06:50:58.248152 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:50:58 crc kubenswrapper[4642]: I0128 06:50:58.248963 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:50:58 crc kubenswrapper[4642]: I0128 06:50:58.250731 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:50:58 crc kubenswrapper[4642]: I0128 06:50:58.253273 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.111227 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.259539 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.259595 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.289667 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.642714 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.642776 4642 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.671098 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:50:59 crc kubenswrapper[4642]: I0128 06:50:59.712357 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.228897 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jkmz" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="registry-server" containerID="cri-o://0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671" gracePeriod=2 Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.229238 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c56jc" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="registry-server" containerID="cri-o://cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4" gracePeriod=2 Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.262042 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.269989 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.444134 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.444415 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.474664 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.657303 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.704571 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content\") pod \"14c19495-c3cb-4f36-9d69-c9a893c766f5\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.704621 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8lx\" (UniqueName: \"kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx\") pod \"14c19495-c3cb-4f36-9d69-c9a893c766f5\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.704666 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities\") pod \"14c19495-c3cb-4f36-9d69-c9a893c766f5\" (UID: \"14c19495-c3cb-4f36-9d69-c9a893c766f5\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.705317 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities" (OuterVolumeSpecName: "utilities") pod "14c19495-c3cb-4f36-9d69-c9a893c766f5" (UID: "14c19495-c3cb-4f36-9d69-c9a893c766f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.713404 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx" (OuterVolumeSpecName: "kube-api-access-cq8lx") pod "14c19495-c3cb-4f36-9d69-c9a893c766f5" (UID: "14c19495-c3cb-4f36-9d69-c9a893c766f5"). InnerVolumeSpecName "kube-api-access-cq8lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.720102 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.740986 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c19495-c3cb-4f36-9d69-c9a893c766f5" (UID: "14c19495-c3cb-4f36-9d69-c9a893c766f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.805848 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.805872 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq8lx\" (UniqueName: \"kubernetes.io/projected/14c19495-c3cb-4f36-9d69-c9a893c766f5-kube-api-access-cq8lx\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.805885 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c19495-c3cb-4f36-9d69-c9a893c766f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.878659 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.907435 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content\") pod \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.907511 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqc4s\" (UniqueName: \"kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s\") pod \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.908009 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities\") pod \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\" (UID: \"fc531134-3ad4-4937-ab7f-8c3fac78dae6\") " Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.908656 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities" (OuterVolumeSpecName: "utilities") pod "fc531134-3ad4-4937-ab7f-8c3fac78dae6" (UID: "fc531134-3ad4-4937-ab7f-8c3fac78dae6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.909962 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.910916 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s" (OuterVolumeSpecName: "kube-api-access-fqc4s") pod "fc531134-3ad4-4937-ab7f-8c3fac78dae6" (UID: "fc531134-3ad4-4937-ab7f-8c3fac78dae6"). InnerVolumeSpecName "kube-api-access-fqc4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:00 crc kubenswrapper[4642]: I0128 06:51:00.947340 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc531134-3ad4-4937-ab7f-8c3fac78dae6" (UID: "fc531134-3ad4-4937-ab7f-8c3fac78dae6"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.009806 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.009842 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqc4s\" (UniqueName: \"kubernetes.io/projected/fc531134-3ad4-4937-ab7f-8c3fac78dae6-kube-api-access-fqc4s\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.009854 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc531134-3ad4-4937-ab7f-8c3fac78dae6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.095009 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.234111 4642 generic.go:334] "Generic (PLEG): container finished" podID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerID="cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4" exitCode=0 Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.234165 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c56jc" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.234239 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerDied","Data":"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4"} Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.234298 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c56jc" event={"ID":"14c19495-c3cb-4f36-9d69-c9a893c766f5","Type":"ContainerDied","Data":"720d07046df446f01a72620679f03ec7a29fc522affefdc6e3d506873f5cff81"} Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.234322 4642 scope.go:117] "RemoveContainer" containerID="cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.236394 4642 generic.go:334] "Generic (PLEG): container finished" podID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerID="0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671" exitCode=0 Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.236485 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerDied","Data":"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"} Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.236536 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jkmz" event={"ID":"fc531134-3ad4-4937-ab7f-8c3fac78dae6","Type":"ContainerDied","Data":"ab040461605dcb5470d9a9c03baab6590a835383f2821842fa36345a12dca1c9"} Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.236834 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jkmz" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.249524 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.253158 4642 scope.go:117] "RemoveContainer" containerID="7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.256094 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c56jc"] Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.262606 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.264424 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jkmz"] Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.264496 4642 scope.go:117] "RemoveContainer" containerID="b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.274812 4642 scope.go:117] "RemoveContainer" containerID="cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4" Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.275176 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4\": container with ID starting with cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4 not found: ID does not exist" containerID="cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275229 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4"} err="failed to get container status \"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4\": rpc error: code = NotFound desc = could not find container \"cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4\": container with ID starting with cf784a9c03c1702e653b26653c017559f0b965453501d8964a9de8940626f6d4 not found: ID does not exist" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275261 4642 scope.go:117] "RemoveContainer" containerID="7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9" Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.275448 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9\": container with ID starting with 7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9 not found: ID does not exist" containerID="7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275468 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9"} err="failed to get container status \"7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9\": rpc error: code = NotFound desc = could not find container \"7d00e975d90274615fcb6bfd247bd8bde01c3c6dacf914f42219a847b29be9b9\": container with ID starting with 
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275480 4642 scope.go:117] "RemoveContainer" containerID="b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e"
Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.275697 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e\": container with ID starting with b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e not found: ID does not exist" containerID="b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275726 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e"} err="failed to get container status \"b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e\": rpc error: code = NotFound desc = could not find container \"b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e\": container with ID starting with b23b39dee5d81e56971c69505e3d35500db2d9d960b5fccbf2cfcb630dbb5b3e not found: ID does not exist"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.275739 4642 scope.go:117] "RemoveContainer" containerID="0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.277738 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ps9zm"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.285897 4642 scope.go:117] "RemoveContainer" containerID="253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.304336 4642 scope.go:117] "RemoveContainer" containerID="1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.315341 4642 scope.go:117] "RemoveContainer" containerID="0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"
Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.315680 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671\": container with ID starting with 0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671 not found: ID does not exist" containerID="0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.315706 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671"} err="failed to get container status \"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671\": rpc error: code = NotFound desc = could not find container \"0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671\": container with ID starting with 0b268d79b1447bba2a8e196425f5bc88e1f87f19c60a9c5457e4891c61f37671 not found: ID does not exist"
Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.315727 4642 scope.go:117] "RemoveContainer" containerID="253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a"
Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.316027 4642 log.go:32]
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a\": container with ID starting with 253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a not found: ID does not exist" containerID="253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.316046 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a"} err="failed to get container status \"253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a\": rpc error: code = NotFound desc = could not find container \"253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a\": container with ID starting with 253ef214fdd2ea6e5a92a4d0c0bb03498be3dc1e2c7cba686c4991749ecb2a7a not found: ID does not exist" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.316061 4642 scope.go:117] "RemoveContainer" containerID="1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8" Jan 28 06:51:01 crc kubenswrapper[4642]: E0128 06:51:01.316293 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8\": container with ID starting with 1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8 not found: ID does not exist" containerID="1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8" Jan 28 06:51:01 crc kubenswrapper[4642]: I0128 06:51:01.316312 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8"} err="failed to get container status \"1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8\": rpc error: code = NotFound desc = could not find container \"1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8\": container with ID starting with 1b75892557e9ab3d90e16675e7a4e01bf670f7f89a9bfdc8ed169c0e56e002c8 not found: ID does not exist" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.113419 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.243464 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdvcx" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="registry-server" containerID="cri-o://1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d" gracePeriod=2 Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.623153 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.727614 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content\") pod \"399d0f01-0fd0-407e-a63d-e7c900f24452\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.727658 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities\") pod \"399d0f01-0fd0-407e-a63d-e7c900f24452\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.727745 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsjjt\" (UniqueName: \"kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt\") pod \"399d0f01-0fd0-407e-a63d-e7c900f24452\" (UID: \"399d0f01-0fd0-407e-a63d-e7c900f24452\") " Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.728761 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities" (OuterVolumeSpecName: "utilities") pod "399d0f01-0fd0-407e-a63d-e7c900f24452" (UID: "399d0f01-0fd0-407e-a63d-e7c900f24452"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.733438 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt" (OuterVolumeSpecName: "kube-api-access-fsjjt") pod "399d0f01-0fd0-407e-a63d-e7c900f24452" (UID: "399d0f01-0fd0-407e-a63d-e7c900f24452"). InnerVolumeSpecName "kube-api-access-fsjjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.747940 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "399d0f01-0fd0-407e-a63d-e7c900f24452" (UID: "399d0f01-0fd0-407e-a63d-e7c900f24452"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.828847 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.828883 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399d0f01-0fd0-407e-a63d-e7c900f24452-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:02 crc kubenswrapper[4642]: I0128 06:51:02.828894 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsjjt\" (UniqueName: \"kubernetes.io/projected/399d0f01-0fd0-407e-a63d-e7c900f24452-kube-api-access-fsjjt\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.023582 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.104393 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" path="/var/lib/kubelet/pods/14c19495-c3cb-4f36-9d69-c9a893c766f5/volumes" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.105038 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" path="/var/lib/kubelet/pods/fc531134-3ad4-4937-ab7f-8c3fac78dae6/volumes" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.248941 4642 generic.go:334] "Generic (PLEG): container finished" podID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerID="1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d" exitCode=0 Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.248983 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerDied","Data":"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d"} Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.249009 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdvcx" event={"ID":"399d0f01-0fd0-407e-a63d-e7c900f24452","Type":"ContainerDied","Data":"b7be0a6d26808c75c54f36d7fedc1918cccc881bf4e1b0a0a3d8c768b14e2747"} Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.249026 4642 scope.go:117] "RemoveContainer" containerID="1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.248985 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdvcx" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.262549 4642 scope.go:117] "RemoveContainer" containerID="c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.263419 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.265772 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdvcx"] Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.274652 4642 scope.go:117] "RemoveContainer" containerID="3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288011 4642 scope.go:117] "RemoveContainer" containerID="1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d" Jan 28 06:51:03 crc kubenswrapper[4642]: E0128 06:51:03.288314 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d\": container with ID starting with 1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d not found: ID does not exist" containerID="1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288355 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d"} err="failed to get container status \"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d\": rpc error: code = NotFound desc = could not find container \"1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d\": container with ID starting with 1bb4c90a7f4198f0d7681bdc37b564caf91259a747f9c0c7bf67d3731d209b8d not found: ID does not exist" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288383 4642 scope.go:117] "RemoveContainer" containerID="c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77" Jan 28 06:51:03 crc kubenswrapper[4642]: E0128 06:51:03.288732 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77\": container with ID starting with c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77 not found: ID does not exist" containerID="c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288760 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77"} err="failed to get container status \"c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77\": rpc error: code = NotFound desc = could not find container \"c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77\": container with ID starting with c7ac32f493aa94cf7827b5e33cadbc01e20d788f80ce30822441a93584884d77 not found: ID does not exist" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288778 4642 scope.go:117] "RemoveContainer" containerID="3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937" Jan 28 06:51:03 crc kubenswrapper[4642]: E0128 06:51:03.288976 4642 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937\": container with ID starting with 3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937 not found: ID does not exist" containerID="3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.288999 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937"} err="failed to get container status \"3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937\": rpc error: code = NotFound desc = could not find container \"3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937\": container with ID starting with 3649b91fad03dff53bf74fe00fad77fc754489ab5fac193b48eb6a0bed7b6937 not found: ID does not exist" Jan 28 06:51:03 crc kubenswrapper[4642]: I0128 06:51:03.588019 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nsxdk" Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.512669 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.513205 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2kvw" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="registry-server" containerID="cri-o://1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8" gracePeriod=2 Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.850067 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"] Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.850299 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" podUID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" containerName="controller-manager" containerID="cri-o://601b299fa9fb0c67340c35e001b17f5aa5a776b4339a2ab62947a7d372bc6acd" gracePeriod=30 Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.948838 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.950015 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" podUID="edac8762-d5ab-42ef-b810-8de869bfc50e" containerName="route-controller-manager" containerID="cri-o://ea5be59e58a7284424ed9144675c75292b0fa6ac389284bd00f93b8ca7474c28" gracePeriod=30 Jan 28 06:51:04 crc kubenswrapper[4642]: I0128 06:51:04.955226 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.112116 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" path="/var/lib/kubelet/pods/399d0f01-0fd0-407e-a63d-e7c900f24452/volumes" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.150518 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities\") pod \"0a83f4e9-662c-4d05-b296-59e7ca842b15\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.150658 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xvp\" (UniqueName: \"kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp\") pod \"0a83f4e9-662c-4d05-b296-59e7ca842b15\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.150678 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content\") pod \"0a83f4e9-662c-4d05-b296-59e7ca842b15\" (UID: \"0a83f4e9-662c-4d05-b296-59e7ca842b15\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.151262 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities" (OuterVolumeSpecName: "utilities") pod "0a83f4e9-662c-4d05-b296-59e7ca842b15" (UID: "0a83f4e9-662c-4d05-b296-59e7ca842b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.157104 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp" (OuterVolumeSpecName: "kube-api-access-x6xvp") pod "0a83f4e9-662c-4d05-b296-59e7ca842b15" (UID: "0a83f4e9-662c-4d05-b296-59e7ca842b15"). InnerVolumeSpecName "kube-api-access-x6xvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.247116 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a83f4e9-662c-4d05-b296-59e7ca842b15" (UID: "0a83f4e9-662c-4d05-b296-59e7ca842b15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.252444 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6xvp\" (UniqueName: \"kubernetes.io/projected/0a83f4e9-662c-4d05-b296-59e7ca842b15-kube-api-access-x6xvp\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.252475 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.252484 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a83f4e9-662c-4d05-b296-59e7ca842b15-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.268238 4642 generic.go:334] "Generic (PLEG): container finished" podID="edac8762-d5ab-42ef-b810-8de869bfc50e" containerID="ea5be59e58a7284424ed9144675c75292b0fa6ac389284bd00f93b8ca7474c28" exitCode=0 Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.268324 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" event={"ID":"edac8762-d5ab-42ef-b810-8de869bfc50e","Type":"ContainerDied","Data":"ea5be59e58a7284424ed9144675c75292b0fa6ac389284bd00f93b8ca7474c28"} Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.270153 4642 generic.go:334] "Generic (PLEG): container finished" podID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerID="1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8" exitCode=0 Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.270229 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2kvw" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.270231 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerDied","Data":"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8"} Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.271484 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2kvw" event={"ID":"0a83f4e9-662c-4d05-b296-59e7ca842b15","Type":"ContainerDied","Data":"9783e5639a22eaba150e2b4718d26c797cd36998386385e566ca8e44366a5523"} Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.271531 4642 scope.go:117] "RemoveContainer" containerID="1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.272108 4642 generic.go:334] "Generic (PLEG): container finished" podID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" containerID="601b299fa9fb0c67340c35e001b17f5aa5a776b4339a2ab62947a7d372bc6acd" exitCode=0 Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.272148 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" event={"ID":"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858","Type":"ContainerDied","Data":"601b299fa9fb0c67340c35e001b17f5aa5a776b4339a2ab62947a7d372bc6acd"} Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.287746 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.290925 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.292703 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2kvw"] Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.295855 4642 scope.go:117] "RemoveContainer" containerID="f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.334840 4642 scope.go:117] "RemoveContainer" containerID="d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.336224 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.346694 4642 scope.go:117] "RemoveContainer" containerID="1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8" Jan 28 06:51:05 crc kubenswrapper[4642]: E0128 06:51:05.346965 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8\": container with ID starting with 1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8 not found: ID does not exist" containerID="1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.346997 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8"} err="failed to get container status \"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8\": rpc error: code = NotFound desc = could not find container \"1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8\": container with ID starting with 1ce16c422ae748fe9295037bfb981f4eac1f5c4d2fcaed5d2d07b48fdc68b4c8 not found: ID does not exist" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.347018 4642 scope.go:117] "RemoveContainer" containerID="f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a" Jan 28 06:51:05 crc kubenswrapper[4642]: E0128 06:51:05.347338 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a\": container with ID starting with f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a not found: ID does not exist" containerID="f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.347379 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a"} err="failed to get container status \"f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a\": rpc error: code = NotFound desc = could not find container \"f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a\": container with ID starting with f58ccf664ac98e8781413e5c54ef727a6ccda6aa9d4ccca9f47b770a2f46a32a not found: ID does not exist" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 
06:51:05.347470 4642 scope.go:117] "RemoveContainer" containerID="d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c" Jan 28 06:51:05 crc kubenswrapper[4642]: E0128 06:51:05.347761 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c\": container with ID starting with d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c not found: ID does not exist" containerID="d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.347784 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c"} err="failed to get container status \"d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c\": rpc error: code = NotFound desc = could not find container \"d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c\": container with ID starting with d44d816f2a624fd271fc884be4f9a8c41ff3556ef98c8c375cbc5e9e8dd3457c not found: ID does not exist" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455115 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles\") pod \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455164 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config\") pod \"edac8762-d5ab-42ef-b810-8de869bfc50e\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455236 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhf4\" (UniqueName: \"kubernetes.io/projected/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-kube-api-access-qrhf4\") pod \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455255 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert\") pod \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455294 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config\") pod \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455317 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca\") pod \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\" (UID: \"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455357 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert\") pod 
\"edac8762-d5ab-42ef-b810-8de869bfc50e\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455385 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca\") pod \"edac8762-d5ab-42ef-b810-8de869bfc50e\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455415 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq8gs\" (UniqueName: \"kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs\") pod \"edac8762-d5ab-42ef-b810-8de869bfc50e\" (UID: \"edac8762-d5ab-42ef-b810-8de869bfc50e\") " Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455828 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" (UID: "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.455845 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config" (OuterVolumeSpecName: "config") pod "edac8762-d5ab-42ef-b810-8de869bfc50e" (UID: "edac8762-d5ab-42ef-b810-8de869bfc50e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.456207 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config" (OuterVolumeSpecName: "config") pod "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" (UID: "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.456458 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca" (OuterVolumeSpecName: "client-ca") pod "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" (UID: "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.456646 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca" (OuterVolumeSpecName: "client-ca") pod "edac8762-d5ab-42ef-b810-8de869bfc50e" (UID: "edac8762-d5ab-42ef-b810-8de869bfc50e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.459741 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-kube-api-access-qrhf4" (OuterVolumeSpecName: "kube-api-access-qrhf4") pod "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" (UID: "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858"). InnerVolumeSpecName "kube-api-access-qrhf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.459831 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs" (OuterVolumeSpecName: "kube-api-access-mq8gs") pod "edac8762-d5ab-42ef-b810-8de869bfc50e" (UID: "edac8762-d5ab-42ef-b810-8de869bfc50e"). InnerVolumeSpecName "kube-api-access-mq8gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.459861 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" (UID: "18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.459993 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edac8762-d5ab-42ef-b810-8de869bfc50e" (UID: "edac8762-d5ab-42ef-b810-8de869bfc50e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556467 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556515 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edac8762-d5ab-42ef-b810-8de869bfc50e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556526 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556535 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq8gs\" (UniqueName: \"kubernetes.io/projected/edac8762-d5ab-42ef-b810-8de869bfc50e-kube-api-access-mq8gs\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556549 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556560 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edac8762-d5ab-42ef-b810-8de869bfc50e-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556570 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhf4\" (UniqueName: \"kubernetes.io/projected/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-kube-api-access-qrhf4\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 06:51:05.556577 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:05 crc kubenswrapper[4642]: I0128 
06:51:05.556585 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.279030 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.279029 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq" event={"ID":"edac8762-d5ab-42ef-b810-8de869bfc50e","Type":"ContainerDied","Data":"2e6f880c3b266d58433297847cf42032ed0aea857afa7dbd43f2de6fbe765f9c"} Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.279157 4642 scope.go:117] "RemoveContainer" containerID="ea5be59e58a7284424ed9144675c75292b0fa6ac389284bd00f93b8ca7474c28" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.282324 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" event={"ID":"18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858","Type":"ContainerDied","Data":"cd76ce02490f8f7a0fb2c8bb438d1c07a16a7f912e909a596fa8f00889cba99a"} Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.282413 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-psz6q" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.292712 4642 scope.go:117] "RemoveContainer" containerID="601b299fa9fb0c67340c35e001b17f5aa5a776b4339a2ab62947a7d372bc6acd" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.302103 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.306072 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-r4wkq"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.318077 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.318943 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-psz6q"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.333574 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.333820 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.333839 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.333854 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.333861 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.333870 4642 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.333876 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.333883 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.333888 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.333919 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edac8762-d5ab-42ef-b810-8de869bfc50e" containerName="route-controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334476 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="edac8762-d5ab-42ef-b810-8de869bfc50e" containerName="route-controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334488 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334589 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334600 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334608 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334619 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334624 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="extract-utilities" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334644 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334650 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334657 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334662 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334670 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334675 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="extract-content" Jan 28 06:51:06 crc 
kubenswrapper[4642]: E0128 06:51:06.334685 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.334690 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.334699 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335091 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="extract-content" Jan 28 06:51:06 crc kubenswrapper[4642]: E0128 06:51:06.335103 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" containerName="controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335170 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" containerName="controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335303 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" containerName="controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335314 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="edac8762-d5ab-42ef-b810-8de869bfc50e" containerName="route-controller-manager" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335323 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="399d0f01-0fd0-407e-a63d-e7c900f24452" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335330 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335338 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc531134-3ad4-4937-ab7f-8c3fac78dae6" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335348 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c19495-c3cb-4f36-9d69-c9a893c766f5" containerName="registry-server" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.335777 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.336325 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.337079 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338387 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338666 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338756 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338800 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338902 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.338980 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.339073 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.339637 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.339790 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.340434 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.340450 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.340523 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.340591 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.341216 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.348969 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.363830 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.363878 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2ck\" (UniqueName: 
\"kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.363904 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhr7z\" (UniqueName: \"kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.363922 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.363942 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.364102 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.364147 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.364168 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.364282 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465559 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465606 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2ck\" (UniqueName: \"kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465625 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465650 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhr7z\" (UniqueName: \"kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465668 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465704 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465733 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.465777 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert\") pod \"controller-manager-c5c847958-rpb4f\" (UID: 
\"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.466572 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.467041 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.467684 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.467720 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.467810 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.472864 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.474430 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.478572 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2ck\" (UniqueName: \"kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck\") pod \"controller-manager-c5c847958-rpb4f\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.479875 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hhr7z\" (UniqueName: \"kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z\") pod \"route-controller-manager-88f5b8547-cjv66\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.652836 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:06 crc kubenswrapper[4642]: I0128 06:51:06.660324 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.016951 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.050171 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:07 crc kubenswrapper[4642]: W0128 06:51:07.054694 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43651102_735d_490b_b53e_9b94cf5cd4ba.slice/crio-f38c7f90f3dfd2579950483a4f9a09ee5a7e91d3971662da9d6875b71bcd0d0d WatchSource:0}: Error finding container f38c7f90f3dfd2579950483a4f9a09ee5a7e91d3971662da9d6875b71bcd0d0d: Status 404 returned error can't find the container with id f38c7f90f3dfd2579950483a4f9a09ee5a7e91d3971662da9d6875b71bcd0d0d Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.105042 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a83f4e9-662c-4d05-b296-59e7ca842b15" path="/var/lib/kubelet/pods/0a83f4e9-662c-4d05-b296-59e7ca842b15/volumes" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.105679 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858" path="/var/lib/kubelet/pods/18a08ae1-6b1f-4cf6-adb1-2d0cd9e61858/volumes" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.106134 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edac8762-d5ab-42ef-b810-8de869bfc50e" path="/var/lib/kubelet/pods/edac8762-d5ab-42ef-b810-8de869bfc50e/volumes" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.290970 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" event={"ID":"43651102-735d-490b-b53e-9b94cf5cd4ba","Type":"ContainerStarted","Data":"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef"} Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.291253 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.291265 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" event={"ID":"43651102-735d-490b-b53e-9b94cf5cd4ba","Type":"ContainerStarted","Data":"f38c7f90f3dfd2579950483a4f9a09ee5a7e91d3971662da9d6875b71bcd0d0d"} Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.295050 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" 
event={"ID":"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc","Type":"ContainerStarted","Data":"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd"} Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.295092 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" event={"ID":"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc","Type":"ContainerStarted","Data":"adbebee09538de345f66ba569e0284609aaf6bc18b1fb6c5593d52392309c085"} Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.295212 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.298897 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.309406 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" podStartSLOduration=3.309391441 podStartE2EDuration="3.309391441s" podCreationTimestamp="2026-01-28 06:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:07.307744112 +0000 UTC m=+190.539832921" watchObservedRunningTime="2026-01-28 06:51:07.309391441 +0000 UTC m=+190.541480250" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.327775 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" podStartSLOduration=3.327760878 podStartE2EDuration="3.327760878s" podCreationTimestamp="2026-01-28 06:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:07.327605987 +0000 UTC m=+190.559694796" watchObservedRunningTime="2026-01-28 06:51:07.327760878 +0000 UTC m=+190.559849687" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.416112 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.543234 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.543828 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.545234 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.546947 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.552708 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.677274 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.677340 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.778696 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.778803 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.778823 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.794480 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:07 crc kubenswrapper[4642]: I0128 06:51:07.856691 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:08 crc kubenswrapper[4642]: I0128 06:51:08.190431 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 06:51:08 crc kubenswrapper[4642]: I0128 06:51:08.199255 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:51:08 crc kubenswrapper[4642]: I0128 06:51:08.199309 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:51:08 crc kubenswrapper[4642]: I0128 06:51:08.302263 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"966f2fa4-9a12-419a-8ae3-c57592daaef7","Type":"ContainerStarted","Data":"79c68c17b23accb8ff3499a6a13e9333022d246c64dcf7dd6ac0a00f2641c585"} Jan 28 06:51:09 crc kubenswrapper[4642]: I0128 06:51:09.307409 4642 generic.go:334] "Generic (PLEG): container finished" podID="966f2fa4-9a12-419a-8ae3-c57592daaef7" containerID="f664655eb803e84027a1d43bc6a0e493c5f39537a5dc698b42cdd127ccf9bac2" exitCode=0 Jan 28 06:51:09 crc kubenswrapper[4642]: I0128 06:51:09.307532 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"966f2fa4-9a12-419a-8ae3-c57592daaef7","Type":"ContainerDied","Data":"f664655eb803e84027a1d43bc6a0e493c5f39537a5dc698b42cdd127ccf9bac2"} Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.556844 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.713074 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access\") pod \"966f2fa4-9a12-419a-8ae3-c57592daaef7\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.713425 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir\") pod \"966f2fa4-9a12-419a-8ae3-c57592daaef7\" (UID: \"966f2fa4-9a12-419a-8ae3-c57592daaef7\") " Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.713475 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "966f2fa4-9a12-419a-8ae3-c57592daaef7" (UID: "966f2fa4-9a12-419a-8ae3-c57592daaef7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.713808 4642 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/966f2fa4-9a12-419a-8ae3-c57592daaef7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.720240 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "966f2fa4-9a12-419a-8ae3-c57592daaef7" (UID: "966f2fa4-9a12-419a-8ae3-c57592daaef7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:10 crc kubenswrapper[4642]: I0128 06:51:10.814811 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/966f2fa4-9a12-419a-8ae3-c57592daaef7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:11 crc kubenswrapper[4642]: I0128 06:51:11.318821 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"966f2fa4-9a12-419a-8ae3-c57592daaef7","Type":"ContainerDied","Data":"79c68c17b23accb8ff3499a6a13e9333022d246c64dcf7dd6ac0a00f2641c585"} Jan 28 06:51:11 crc kubenswrapper[4642]: I0128 06:51:11.318857 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c68c17b23accb8ff3499a6a13e9333022d246c64dcf7dd6ac0a00f2641c585" Jan 28 06:51:11 crc kubenswrapper[4642]: I0128 06:51:11.318867 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.942799 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 06:51:14 crc kubenswrapper[4642]: E0128 06:51:14.943405 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966f2fa4-9a12-419a-8ae3-c57592daaef7" containerName="pruner" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.943420 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="966f2fa4-9a12-419a-8ae3-c57592daaef7" containerName="pruner" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.943526 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="966f2fa4-9a12-419a-8ae3-c57592daaef7" containerName="pruner" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.943895 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.945428 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.945436 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 06:51:14 crc kubenswrapper[4642]: I0128 06:51:14.951379 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.067486 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.067625 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.067770 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.168785 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.168860 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.168905 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.168975 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.169017 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock\") pod \"installer-9-crc\" (UID: 
\"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.186117 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access\") pod \"installer-9-crc\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.258370 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:15 crc kubenswrapper[4642]: I0128 06:51:15.614719 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 06:51:16 crc kubenswrapper[4642]: I0128 06:51:16.340804 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0","Type":"ContainerStarted","Data":"7391b239d9c233b5b73dd1ba902c1f79f20c004f7fe6878c5fa049b38e168b35"} Jan 28 06:51:16 crc kubenswrapper[4642]: I0128 06:51:16.341126 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0","Type":"ContainerStarted","Data":"a4de104489ec0d13abb778e6268a3881528551f36ac0c1823533394e9cdb2013"} Jan 28 06:51:16 crc kubenswrapper[4642]: I0128 06:51:16.353057 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.353041339 podStartE2EDuration="2.353041339s" podCreationTimestamp="2026-01-28 06:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:16.350236638 +0000 UTC m=+199.582325447" watchObservedRunningTime="2026-01-28 06:51:16.353041339 +0000 UTC m=+199.585130138" Jan 28 06:51:24 crc kubenswrapper[4642]: I0128 06:51:24.802950 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:24 crc kubenswrapper[4642]: I0128 06:51:24.803836 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" podUID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" containerName="controller-manager" containerID="cri-o://4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd" gracePeriod=30 Jan 28 06:51:24 crc kubenswrapper[4642]: I0128 06:51:24.818329 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:24 crc kubenswrapper[4642]: I0128 06:51:24.818923 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" podUID="43651102-735d-490b-b53e-9b94cf5cd4ba" containerName="route-controller-manager" containerID="cri-o://d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef" gracePeriod=30 Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.260931 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.311293 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.381605 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert\") pod \"43651102-735d-490b-b53e-9b94cf5cd4ba\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.381697 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhr7z\" (UniqueName: \"kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z\") pod \"43651102-735d-490b-b53e-9b94cf5cd4ba\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.381735 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca\") pod \"43651102-735d-490b-b53e-9b94cf5cd4ba\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.381775 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config\") pod \"43651102-735d-490b-b53e-9b94cf5cd4ba\" (UID: \"43651102-735d-490b-b53e-9b94cf5cd4ba\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.382392 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "43651102-735d-490b-b53e-9b94cf5cd4ba" (UID: "43651102-735d-490b-b53e-9b94cf5cd4ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.382544 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config" (OuterVolumeSpecName: "config") pod "43651102-735d-490b-b53e-9b94cf5cd4ba" (UID: "43651102-735d-490b-b53e-9b94cf5cd4ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.382665 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.382683 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43651102-735d-490b-b53e-9b94cf5cd4ba-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.385777 4642 generic.go:334] "Generic (PLEG): container finished" podID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" containerID="4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd" exitCode=0 Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.385845 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.385874 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" event={"ID":"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc","Type":"ContainerDied","Data":"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd"} Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.385929 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c847958-rpb4f" event={"ID":"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc","Type":"ContainerDied","Data":"adbebee09538de345f66ba569e0284609aaf6bc18b1fb6c5593d52392309c085"} Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.385950 4642 scope.go:117] "RemoveContainer" containerID="4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.387404 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z" (OuterVolumeSpecName: "kube-api-access-hhr7z") pod "43651102-735d-490b-b53e-9b94cf5cd4ba" (UID: "43651102-735d-490b-b53e-9b94cf5cd4ba"). InnerVolumeSpecName "kube-api-access-hhr7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.387773 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "43651102-735d-490b-b53e-9b94cf5cd4ba" (UID: "43651102-735d-490b-b53e-9b94cf5cd4ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.388515 4642 generic.go:334] "Generic (PLEG): container finished" podID="43651102-735d-490b-b53e-9b94cf5cd4ba" containerID="d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef" exitCode=0 Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.388562 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.388563 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" event={"ID":"43651102-735d-490b-b53e-9b94cf5cd4ba","Type":"ContainerDied","Data":"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef"} Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.389310 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66" event={"ID":"43651102-735d-490b-b53e-9b94cf5cd4ba","Type":"ContainerDied","Data":"f38c7f90f3dfd2579950483a4f9a09ee5a7e91d3971662da9d6875b71bcd0d0d"} Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.400132 4642 scope.go:117] "RemoveContainer" containerID="4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd" Jan 28 06:51:25 crc kubenswrapper[4642]: E0128 06:51:25.400451 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd\": container with ID starting with 4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd not found: ID does not exist" containerID="4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.400581 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd"} err="failed to get container status \"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd\": rpc error: code = NotFound desc = could not find container \"4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd\": container with ID starting with 4e48bcbd881049071ca1dd2e292257df1907f7d783939a8f7a91f1d326b8adbd not found: ID does not exist" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.400673 4642 scope.go:117] "RemoveContainer" containerID="d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.411054 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.413465 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-88f5b8547-cjv66"] Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.413977 4642 scope.go:117] "RemoveContainer" containerID="d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef" Jan 28 06:51:25 crc kubenswrapper[4642]: E0128 06:51:25.414340 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef\": container with ID starting with d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef not found: ID does not exist" containerID="d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.414366 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef"} err="failed to get container status 
\"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef\": rpc error: code = NotFound desc = could not find container \"d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef\": container with ID starting with d36b9f2d142757bdf56dc0f110b0caf014b5dccc3ccf23021721743159da92ef not found: ID does not exist" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484018 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc2ck\" (UniqueName: \"kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck\") pod \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484081 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca\") pod \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484100 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert\") pod \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484151 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles\") pod \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484181 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config\") pod \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\" (UID: \"cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc\") " Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484392 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhr7z\" (UniqueName: \"kubernetes.io/projected/43651102-735d-490b-b53e-9b94cf5cd4ba-kube-api-access-hhr7z\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484408 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43651102-735d-490b-b53e-9b94cf5cd4ba-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.484970 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" (UID: "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.485004 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca" (OuterVolumeSpecName: "client-ca") pod "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" (UID: "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.485058 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config" (OuterVolumeSpecName: "config") pod "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" (UID: "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.486941 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck" (OuterVolumeSpecName: "kube-api-access-nc2ck") pod "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" (UID: "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc"). InnerVolumeSpecName "kube-api-access-nc2ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.487172 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" (UID: "cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.585688 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc2ck\" (UniqueName: \"kubernetes.io/projected/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-kube-api-access-nc2ck\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.585891 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.585999 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.586061 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.586122 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.705807 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:25 crc kubenswrapper[4642]: I0128 06:51:25.709021 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c5c847958-rpb4f"] Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.128447 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerName="oauth-openshift" containerID="cri-o://e7079d16ef882be6c81243ffe9c1fe72341e956c4f23f620c2ec82dc87aa52d7" gracePeriod=15 Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.346922 4642 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:26 crc kubenswrapper[4642]: E0128 06:51:26.347135 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43651102-735d-490b-b53e-9b94cf5cd4ba" containerName="route-controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347153 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="43651102-735d-490b-b53e-9b94cf5cd4ba" containerName="route-controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: E0128 06:51:26.347163 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" containerName="controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347169 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" containerName="controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347259 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" containerName="controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347277 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="43651102-735d-490b-b53e-9b94cf5cd4ba" containerName="route-controller-manager" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347571 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.347781 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.348148 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350164 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350429 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350709 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350777 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350891 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.350942 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.356166 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.357788 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.357895 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.358020 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.364251 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.364658 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.364910 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.372701 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.374036 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.393528 4642 generic.go:334] "Generic (PLEG): container finished" podID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerID="e7079d16ef882be6c81243ffe9c1fe72341e956c4f23f620c2ec82dc87aa52d7" exitCode=0 Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.393595 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" event={"ID":"1d9faf46-d412-4182-96a4-f8350fd4c34e","Type":"ContainerDied","Data":"e7079d16ef882be6c81243ffe9c1fe72341e956c4f23f620c2ec82dc87aa52d7"} Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394118 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394163 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394180 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394227 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394258 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394298 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dz25\" (UniqueName: \"kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394338 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394359 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.394403 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmb6r\" (UniqueName: \"kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.425552 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495163 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmb6r\" (UniqueName: \"kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495263 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495296 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495313 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495332 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495368 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495410 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dz25\" (UniqueName: \"kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495432 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.495454 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.496348 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.496377 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.496821 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.497160 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.497661 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.500601 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"
Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.500601 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w"
\"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.507171 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmb6r\" (UniqueName: \"kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r\") pod \"route-controller-manager-55d684b56c-xr959\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.508243 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dz25\" (UniqueName: \"kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25\") pod \"controller-manager-dd78cd67-skh2w\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.597065 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.597984 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598019 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598092 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598126 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lqnn\" (UniqueName: \"kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598152 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598180 4642 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598217 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598238 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598317 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598341 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598363 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598404 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598431 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies\") pod \"1d9faf46-d412-4182-96a4-f8350fd4c34e\" (UID: \"1d9faf46-d412-4182-96a4-f8350fd4c34e\") " Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598522 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598780 4642 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598810 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598852 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.598898 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.599156 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.601059 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.601474 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.601788 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602221 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn" (OuterVolumeSpecName: "kube-api-access-6lqnn") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "kube-api-access-6lqnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602226 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602295 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602547 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602660 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.602769 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1d9faf46-d412-4182-96a4-f8350fd4c34e" (UID: "1d9faf46-d412-4182-96a4-f8350fd4c34e"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.664698 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.676040 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699610 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699641 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lqnn\" (UniqueName: \"kubernetes.io/projected/1d9faf46-d412-4182-96a4-f8350fd4c34e-kube-api-access-6lqnn\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699652 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699665 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699674 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699683 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699691 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699699 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699707 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699717 4642 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699725 4642 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699734 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:26 crc kubenswrapper[4642]: I0128 06:51:26.699742 4642 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d9faf46-d412-4182-96a4-f8350fd4c34e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.003162 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.047639 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:27 crc kubenswrapper[4642]: W0128 06:51:27.053343 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d963256_5576_423f_b52d_dcad06ba2be4.slice/crio-19b0fc965d0cb12a4b89f10689bb32fcf21228ebb214b7e6988bd79acf81f20d WatchSource:0}: Error finding container 19b0fc965d0cb12a4b89f10689bb32fcf21228ebb214b7e6988bd79acf81f20d: Status 404 returned error can't find the container with id 19b0fc965d0cb12a4b89f10689bb32fcf21228ebb214b7e6988bd79acf81f20d Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.104656 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43651102-735d-490b-b53e-9b94cf5cd4ba" path="/var/lib/kubelet/pods/43651102-735d-490b-b53e-9b94cf5cd4ba/volumes" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.105313 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc" path="/var/lib/kubelet/pods/cac681b7-54d5-42b5-bab9-9dc6ab1dd0dc/volumes" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.346907 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-848fcd468b-48jx2"] Jan 28 06:51:27 crc kubenswrapper[4642]: E0128 06:51:27.347445 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerName="oauth-openshift" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.347457 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerName="oauth-openshift" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.347560 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" containerName="oauth-openshift" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.348066 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.359733 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-848fcd468b-48jx2"] Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.401148 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.401254 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s9zdh" event={"ID":"1d9faf46-d412-4182-96a4-f8350fd4c34e","Type":"ContainerDied","Data":"9cb244a6245a92262ccae2c4187dae820334a649771ab142179d166c7b3c3446"} Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.401321 4642 scope.go:117] "RemoveContainer" containerID="e7079d16ef882be6c81243ffe9c1fe72341e956c4f23f620c2ec82dc87aa52d7" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.403676 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" event={"ID":"b88b0798-1f9a-4cf9-89a4-5a9fe872b780","Type":"ContainerStarted","Data":"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1"} Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.403716 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" event={"ID":"b88b0798-1f9a-4cf9-89a4-5a9fe872b780","Type":"ContainerStarted","Data":"1a90d90ef8fdb184055b01cff59aad9512406e54f96495bad1d5dccab95d81dd"} Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.403836 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.407024 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" event={"ID":"2d963256-5576-423f-b52d-dcad06ba2be4","Type":"ContainerStarted","Data":"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61"} Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.407052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" event={"ID":"2d963256-5576-423f-b52d-dcad06ba2be4","Type":"ContainerStarted","Data":"19b0fc965d0cb12a4b89f10689bb32fcf21228ebb214b7e6988bd79acf81f20d"} Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.407210 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.411001 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.415027 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.416115 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.417903 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s9zdh"] Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.428095 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" podStartSLOduration=3.428080924 podStartE2EDuration="3.428080924s" podCreationTimestamp="2026-01-28 06:51:24 +0000 UTC" 
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.461114 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" podStartSLOduration=3.461099035 podStartE2EDuration="3.461099035s" podCreationTimestamp="2026-01-28 06:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:27.442153793 +0000 UTC m=+210.674242601" watchObservedRunningTime="2026-01-28 06:51:27.461099035 +0000 UTC m=+210.693187844"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507305 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-error\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507379 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-service-ca\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507407 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkcj\" (UniqueName: \"kubernetes.io/projected/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-kube-api-access-2hkcj\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507442 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507636 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-dir\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507745 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-session\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507772 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507789 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507804 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507828 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-policies\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.507988 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-login\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.508036 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.508083 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-router-certs\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.508153 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.608807 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-error\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.608862 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-service-ca\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.608907 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkcj\" (UniqueName: \"kubernetes.io/projected/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-kube-api-access-2hkcj\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.608931 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609582 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-service-ca\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609592 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-dir\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609636 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-dir\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2"
Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609688 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-session\") pod \"oauth-openshift-848fcd468b-48jx2\" 
(UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609727 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609750 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609772 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609825 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-policies\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609856 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-login\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609895 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609917 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-router-certs\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.609964 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.611143 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-audit-policies\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.613596 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.614439 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-error\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.614761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.615129 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.615526 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.615724 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.616085 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: 
\"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.618823 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-user-template-login\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.620883 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-router-certs\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.623853 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-v4-0-config-system-session\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.624659 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkcj\" (UniqueName: \"kubernetes.io/projected/e063277a-5e0b-4f0d-98b1-b35a6d7fb635-kube-api-access-2hkcj\") pod \"oauth-openshift-848fcd468b-48jx2\" (UID: \"e063277a-5e0b-4f0d-98b1-b35a6d7fb635\") " pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.659181 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:27 crc kubenswrapper[4642]: I0128 06:51:27.994769 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-848fcd468b-48jx2"] Jan 28 06:51:28 crc kubenswrapper[4642]: I0128 06:51:28.412992 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" event={"ID":"e063277a-5e0b-4f0d-98b1-b35a6d7fb635","Type":"ContainerStarted","Data":"a26d598baeb51f8f37fce078fbd7da1c309b145151cccac5c9f59b2d8906e912"} Jan 28 06:51:28 crc kubenswrapper[4642]: I0128 06:51:28.413361 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" event={"ID":"e063277a-5e0b-4f0d-98b1-b35a6d7fb635","Type":"ContainerStarted","Data":"1277fcde9991c81f58f4d2bc938729f6302b0fad0c7b0f07f049ba7d8749b5b7"} Jan 28 06:51:28 crc kubenswrapper[4642]: I0128 06:51:28.413389 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:28 crc kubenswrapper[4642]: I0128 06:51:28.417703 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" Jan 28 06:51:28 crc kubenswrapper[4642]: I0128 06:51:28.429632 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-848fcd468b-48jx2" podStartSLOduration=27.429614174 podStartE2EDuration="27.429614174s" podCreationTimestamp="2026-01-28 06:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:28.429569739 +0000 UTC m=+211.661658549" watchObservedRunningTime="2026-01-28 06:51:28.429614174 +0000 UTC m=+211.661702983" Jan 28 06:51:29 crc kubenswrapper[4642]: I0128 06:51:29.105028 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9faf46-d412-4182-96a4-f8350fd4c34e" path="/var/lib/kubelet/pods/1d9faf46-d412-4182-96a4-f8350fd4c34e/volumes" Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.199800 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.200204 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.200259 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.200748 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.200795 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3" gracePeriod=600 Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.463289 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3" exitCode=0 Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.463348 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3"} Jan 28 06:51:38 crc kubenswrapper[4642]: I0128 06:51:38.463381 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52"} Jan 28 06:51:44 crc kubenswrapper[4642]: I0128 06:51:44.823942 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:44 crc kubenswrapper[4642]: I0128 06:51:44.824934 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" podUID="2d963256-5576-423f-b52d-dcad06ba2be4" containerName="controller-manager" containerID="cri-o://da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61" gracePeriod=30 Jan 28 06:51:44 crc kubenswrapper[4642]: I0128 06:51:44.922029 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:44 crc kubenswrapper[4642]: I0128 06:51:44.922305 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" podUID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" containerName="route-controller-manager" containerID="cri-o://6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1" gracePeriod=30 Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.333858 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.383466 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413050 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dz25\" (UniqueName: \"kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25\") pod \"2d963256-5576-423f-b52d-dcad06ba2be4\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413111 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmb6r\" (UniqueName: \"kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r\") pod \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413141 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca\") pod \"2d963256-5576-423f-b52d-dcad06ba2be4\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413177 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert\") pod \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413214 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config\") pod \"2d963256-5576-423f-b52d-dcad06ba2be4\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413231 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") pod \"2d963256-5576-423f-b52d-dcad06ba2be4\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413255 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca\") pod \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413275 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config\") pod \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\" (UID: \"b88b0798-1f9a-4cf9-89a4-5a9fe872b780\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.413293 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles\") pod \"2d963256-5576-423f-b52d-dcad06ba2be4\" (UID: \"2d963256-5576-423f-b52d-dcad06ba2be4\") " Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.414333 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d963256-5576-423f-b52d-dcad06ba2be4" (UID: 
"2d963256-5576-423f-b52d-dcad06ba2be4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.414485 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config" (OuterVolumeSpecName: "config") pod "2d963256-5576-423f-b52d-dcad06ba2be4" (UID: "2d963256-5576-423f-b52d-dcad06ba2be4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.414563 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d963256-5576-423f-b52d-dcad06ba2be4" (UID: "2d963256-5576-423f-b52d-dcad06ba2be4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.414564 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca" (OuterVolumeSpecName: "client-ca") pod "b88b0798-1f9a-4cf9-89a4-5a9fe872b780" (UID: "b88b0798-1f9a-4cf9-89a4-5a9fe872b780"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.414737 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config" (OuterVolumeSpecName: "config") pod "b88b0798-1f9a-4cf9-89a4-5a9fe872b780" (UID: "b88b0798-1f9a-4cf9-89a4-5a9fe872b780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.419631 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b88b0798-1f9a-4cf9-89a4-5a9fe872b780" (UID: "b88b0798-1f9a-4cf9-89a4-5a9fe872b780"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.419637 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r" (OuterVolumeSpecName: "kube-api-access-rmb6r") pod "b88b0798-1f9a-4cf9-89a4-5a9fe872b780" (UID: "b88b0798-1f9a-4cf9-89a4-5a9fe872b780"). InnerVolumeSpecName "kube-api-access-rmb6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.419691 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d963256-5576-423f-b52d-dcad06ba2be4" (UID: "2d963256-5576-423f-b52d-dcad06ba2be4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.419828 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25" (OuterVolumeSpecName: "kube-api-access-8dz25") pod "2d963256-5576-423f-b52d-dcad06ba2be4" (UID: "2d963256-5576-423f-b52d-dcad06ba2be4"). 
InnerVolumeSpecName "kube-api-access-8dz25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.493161 4642 generic.go:334] "Generic (PLEG): container finished" podID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" containerID="6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1" exitCode=0 Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.493246 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" event={"ID":"b88b0798-1f9a-4cf9-89a4-5a9fe872b780","Type":"ContainerDied","Data":"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1"} Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.493274 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" event={"ID":"b88b0798-1f9a-4cf9-89a4-5a9fe872b780","Type":"ContainerDied","Data":"1a90d90ef8fdb184055b01cff59aad9512406e54f96495bad1d5dccab95d81dd"} Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.493292 4642 scope.go:117] "RemoveContainer" containerID="6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.493384 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.496402 4642 generic.go:334] "Generic (PLEG): container finished" podID="2d963256-5576-423f-b52d-dcad06ba2be4" containerID="da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61" exitCode=0 Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.496445 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" event={"ID":"2d963256-5576-423f-b52d-dcad06ba2be4","Type":"ContainerDied","Data":"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61"} Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.496472 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" event={"ID":"2d963256-5576-423f-b52d-dcad06ba2be4","Type":"ContainerDied","Data":"19b0fc965d0cb12a4b89f10689bb32fcf21228ebb214b7e6988bd79acf81f20d"} Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.496519 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd78cd67-skh2w" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.510457 4642 scope.go:117] "RemoveContainer" containerID="6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1" Jan 28 06:51:45 crc kubenswrapper[4642]: E0128 06:51:45.510798 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1\": container with ID starting with 6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1 not found: ID does not exist" containerID="6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.510878 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1"} err="failed to get container status \"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1\": rpc error: code = NotFound desc = could not find container \"6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1\": container with ID starting with 6ae9c365ce216037df637b07731ea778254a2d6e5981dc9f2635d7c5b453b5d1 not found: ID does not exist" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.510971 4642 scope.go:117] "RemoveContainer" containerID="da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514211 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d963256-5576-423f-b52d-dcad06ba2be4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514247 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514258 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514266 4642 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514276 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dz25\" (UniqueName: \"kubernetes.io/projected/2d963256-5576-423f-b52d-dcad06ba2be4-kube-api-access-8dz25\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514285 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmb6r\" (UniqueName: \"kubernetes.io/projected/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-kube-api-access-rmb6r\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514292 4642 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514300 4642 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b88b0798-1f9a-4cf9-89a4-5a9fe872b780-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.514308 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d963256-5576-423f-b52d-dcad06ba2be4-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.521969 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.525745 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dd78cd67-skh2w"] Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.527564 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.529303 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d684b56c-xr959"] Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.529512 4642 scope.go:117] "RemoveContainer" containerID="da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61" Jan 28 06:51:45 crc kubenswrapper[4642]: E0128 06:51:45.529894 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61\": container with ID starting with da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61 not found: ID does not exist" containerID="da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61" Jan 28 06:51:45 crc kubenswrapper[4642]: I0128 06:51:45.529931 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61"} err="failed to get container status \"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61\": rpc error: code = NotFound desc = could not find container \"da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61\": container with ID starting with da6b717172ac6e1130127273c7fabdc3ce8a110fb186c06e0dcf0e7163ae6a61 not found: ID does not exist" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.357838 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj"] Jan 28 06:51:46 crc kubenswrapper[4642]: E0128 06:51:46.358477 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d963256-5576-423f-b52d-dcad06ba2be4" containerName="controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.358490 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d963256-5576-423f-b52d-dcad06ba2be4" containerName="controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: E0128 06:51:46.358507 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" containerName="route-controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.358514 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" containerName="route-controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.358603 4642 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d963256-5576-423f-b52d-dcad06ba2be4" containerName="controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.358617 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" containerName="route-controller-manager" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.359037 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.360247 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dcd5f4dd8-plm52"] Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.360881 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.362419 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.362429 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.362531 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.363047 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.363324 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.363838 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.365465 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.365851 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.365927 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.366029 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.365856 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.366505 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.369987 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.371227 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcd5f4dd8-plm52"] Jan 28 06:51:46 
crc kubenswrapper[4642]: I0128 06:51:46.373167 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj"] Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425026 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-config\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425068 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kfk\" (UniqueName: \"kubernetes.io/projected/1b43cdb0-7bb1-4529-8429-4d146030b92d-kube-api-access-55kfk\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425091 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b43cdb0-7bb1-4529-8429-4d146030b92d-serving-cert\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425138 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-client-ca\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425157 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-proxy-ca-bundles\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425204 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-client-ca\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425221 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5033b62f-30ba-4f10-97b7-701dd6a1fcae-serving-cert\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425269 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkxfl\" (UniqueName: 
\"kubernetes.io/projected/5033b62f-30ba-4f10-97b7-701dd6a1fcae-kube-api-access-dkxfl\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.425289 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-config\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526146 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkxfl\" (UniqueName: \"kubernetes.io/projected/5033b62f-30ba-4f10-97b7-701dd6a1fcae-kube-api-access-dkxfl\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526216 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-config\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526249 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-config\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526275 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kfk\" (UniqueName: \"kubernetes.io/projected/1b43cdb0-7bb1-4529-8429-4d146030b92d-kube-api-access-55kfk\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526292 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b43cdb0-7bb1-4529-8429-4d146030b92d-serving-cert\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526318 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-client-ca\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526339 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-proxy-ca-bundles\") pod 
\"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526355 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-client-ca\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.526368 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5033b62f-30ba-4f10-97b7-701dd6a1fcae-serving-cert\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.527633 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-client-ca\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.528247 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-client-ca\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.528510 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-proxy-ca-bundles\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.528654 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5033b62f-30ba-4f10-97b7-701dd6a1fcae-config\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.528829 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b43cdb0-7bb1-4529-8429-4d146030b92d-config\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.530128 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b43cdb0-7bb1-4529-8429-4d146030b92d-serving-cert\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.538518 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5033b62f-30ba-4f10-97b7-701dd6a1fcae-serving-cert\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.541519 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kfk\" (UniqueName: \"kubernetes.io/projected/1b43cdb0-7bb1-4529-8429-4d146030b92d-kube-api-access-55kfk\") pod \"controller-manager-dcd5f4dd8-plm52\" (UID: \"1b43cdb0-7bb1-4529-8429-4d146030b92d\") " pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.547112 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkxfl\" (UniqueName: \"kubernetes.io/projected/5033b62f-30ba-4f10-97b7-701dd6a1fcae-kube-api-access-dkxfl\") pod \"route-controller-manager-845d7c486c-sj5fj\" (UID: \"5033b62f-30ba-4f10-97b7-701dd6a1fcae\") " pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.675948 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:46 crc kubenswrapper[4642]: I0128 06:51:46.681750 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.022765 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj"] Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.060147 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dcd5f4dd8-plm52"] Jan 28 06:51:47 crc kubenswrapper[4642]: W0128 06:51:47.063884 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b43cdb0_7bb1_4529_8429_4d146030b92d.slice/crio-8b0622918d659dcac2d26b5783195d1e1a56cf768e4ae6fcf9a69d09edf1b755 WatchSource:0}: Error finding container 8b0622918d659dcac2d26b5783195d1e1a56cf768e4ae6fcf9a69d09edf1b755: Status 404 returned error can't find the container with id 8b0622918d659dcac2d26b5783195d1e1a56cf768e4ae6fcf9a69d09edf1b755 Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.103176 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d963256-5576-423f-b52d-dcad06ba2be4" path="/var/lib/kubelet/pods/2d963256-5576-423f-b52d-dcad06ba2be4/volumes" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.103861 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88b0798-1f9a-4cf9-89a4-5a9fe872b780" path="/var/lib/kubelet/pods/b88b0798-1f9a-4cf9-89a4-5a9fe872b780/volumes" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.509100 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" event={"ID":"1b43cdb0-7bb1-4529-8429-4d146030b92d","Type":"ContainerStarted","Data":"e69096256b43f2aba542172ecd97831b41d595b310625cbe59dbdb6739bf4010"} Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.509458 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" event={"ID":"1b43cdb0-7bb1-4529-8429-4d146030b92d","Type":"ContainerStarted","Data":"8b0622918d659dcac2d26b5783195d1e1a56cf768e4ae6fcf9a69d09edf1b755"} Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.509476 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.512577 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" event={"ID":"5033b62f-30ba-4f10-97b7-701dd6a1fcae","Type":"ContainerStarted","Data":"614a5c8f0f80de5deed27e93d0a263d6b39a348f00636bc3c49b2bc9aa08b25d"} Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.512719 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" event={"ID":"5033b62f-30ba-4f10-97b7-701dd6a1fcae","Type":"ContainerStarted","Data":"4f1f917dc9dae97ef4cdf75150c989b6c670499a5b6b9cb6f4a1865f5e5858ed"} Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.512796 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.514664 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.526150 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dcd5f4dd8-plm52" podStartSLOduration=3.526140935 podStartE2EDuration="3.526140935s" podCreationTimestamp="2026-01-28 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:47.523280337 +0000 UTC m=+230.755369146" watchObservedRunningTime="2026-01-28 06:51:47.526140935 +0000 UTC m=+230.758229743" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.539948 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" podStartSLOduration=3.539928374 podStartE2EDuration="3.539928374s" podCreationTimestamp="2026-01-28 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:51:47.535710412 +0000 UTC m=+230.767799221" watchObservedRunningTime="2026-01-28 06:51:47.539928374 +0000 UTC m=+230.772017183" Jan 28 06:51:47 crc kubenswrapper[4642]: I0128 06:51:47.770623 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-845d7c486c-sj5fj" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.336845 4642 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.338310 4642 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.338534 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339142 4642 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339469 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485" gracePeriod=15 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339486 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b" gracePeriod=15 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339478 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9" gracePeriod=15 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339301 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750" gracePeriod=15 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339500 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a" gracePeriod=15 Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.339856 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339910 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.339923 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339931 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.339941 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339949 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.339973 4642 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.339981 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.339998 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340006 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.340015 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340044 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.340056 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340063 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340230 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340247 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340255 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340266 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340275 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.340286 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.373483 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508029 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508084 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508107 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508161 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508205 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508259 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508309 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.508337 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.540000 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.541306 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.541923 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485" exitCode=0 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.541943 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9" exitCode=0 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.541950 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b" exitCode=0 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.541957 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a" exitCode=2 Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.542001 4642 scope.go:117] "RemoveContainer" containerID="70d9e422e29c4563585ee85b03f4d3e5705c7de8c388d337a52b30a7f0ccc096" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609172 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609243 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609300 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609251 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609320 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609345 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609347 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609389 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609397 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609423 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609434 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609444 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609466 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609500 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.609504 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.667898 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:51:53 crc kubenswrapper[4642]: E0128 06:51:53.686899 4642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ed2775bccceff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,LastTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.969914 4642 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 28 06:51:53 crc kubenswrapper[4642]: I0128 06:51:53.969970 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.550032 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.552280 4642 generic.go:334] "Generic (PLEG): container finished" podID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" containerID="7391b239d9c233b5b73dd1ba902c1f79f20c004f7fe6878c5fa049b38e168b35" exitCode=0 Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.552367 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0","Type":"ContainerDied","Data":"7391b239d9c233b5b73dd1ba902c1f79f20c004f7fe6878c5fa049b38e168b35"} Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.553038 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.553355 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.553511 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f"} Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.553541 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5770eeb18d0d525c821758f609678fcae6b546b8652717d344f86d64b7b9053c"} Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.553736 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.554291 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.554809 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: I0128 06:51:54.555104 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:54 crc kubenswrapper[4642]: E0128 06:51:54.692140 4642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ed2775bccceff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,LastTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 06:51:55 
crc kubenswrapper[4642]: I0128 06:51:55.614504 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.616759 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.617318 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.617493 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.617768 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736549 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736610 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736672 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736768 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736845 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.736914 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.737513 4642 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.737539 4642 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.737549 4642 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.843991 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.844727 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.845489 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.846088 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940133 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir\") pod \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940202 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock\") pod \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940242 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access\") pod \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\" (UID: \"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0\") " Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940292 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" (UID: "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940340 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock" (OuterVolumeSpecName: "var-lock") pod "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" (UID: "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940546 4642 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.940568 4642 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:55 crc kubenswrapper[4642]: I0128 06:51:55.946957 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" (UID: "a74d06e3-e9f9-4b9e-a6d4-4d439a470df0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.041214 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74d06e3-e9f9-4b9e-a6d4-4d439a470df0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.565465 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.566415 4642 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750" exitCode=0 Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.566486 4642 scope.go:117] "RemoveContainer" containerID="7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.566502 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.568045 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a74d06e3-e9f9-4b9e-a6d4-4d439a470df0","Type":"ContainerDied","Data":"a4de104489ec0d13abb778e6268a3881528551f36ac0c1823533394e9cdb2013"} Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.568081 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4de104489ec0d13abb778e6268a3881528551f36ac0c1823533394e9cdb2013" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.568099 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.579694 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.579897 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.580156 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.581822 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.582071 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.582279 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.582334 4642 scope.go:117] "RemoveContainer" containerID="ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.595466 4642 scope.go:117] "RemoveContainer" containerID="75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b" Jan 28 06:51:56 
crc kubenswrapper[4642]: I0128 06:51:56.608180 4642 scope.go:117] "RemoveContainer" containerID="ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.619467 4642 scope.go:117] "RemoveContainer" containerID="6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.630981 4642 scope.go:117] "RemoveContainer" containerID="d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.648958 4642 scope.go:117] "RemoveContainer" containerID="7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.649279 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\": container with ID starting with 7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485 not found: ID does not exist" containerID="7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.649306 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485"} err="failed to get container status \"7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\": rpc error: code = NotFound desc = could not find container \"7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485\": container with ID starting with 7181241cbadd0b8edc9a6ff7f035813a1b2a06cdec451dc94d869bb56424a485 not found: ID does not exist" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.649331 4642 scope.go:117] "RemoveContainer" containerID="ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.649519 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\": container with ID starting with ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9 not found: ID does not exist" containerID="ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.649547 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9"} err="failed to get container status \"ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\": rpc error: code = NotFound desc = could not find container \"ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9\": container with ID starting with ad6d564af60cb16fea7167aea68703adced71f83b8ff6a2270e2e6d109f6e9c9 not found: ID does not exist" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.649566 4642 scope.go:117] "RemoveContainer" containerID="75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.649991 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\": container with ID starting with 75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b not found: ID does not 
exist" containerID="75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650022 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b"} err="failed to get container status \"75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\": rpc error: code = NotFound desc = could not find container \"75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b\": container with ID starting with 75be46ee251716ce59f7d4e31811114c790e07ff2068465e1bba6ee4abc6909b not found: ID does not exist" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650039 4642 scope.go:117] "RemoveContainer" containerID="ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.650241 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\": container with ID starting with ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a not found: ID does not exist" containerID="ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650266 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a"} err="failed to get container status \"ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\": rpc error: code = NotFound desc = could not find container \"ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a\": container with ID starting with ee1c4b797e1cd78fa6a850c8c7491a690dab16a3c06eeb71b2dc7e6799cf285a not found: ID does not exist" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650306 4642 scope.go:117] "RemoveContainer" containerID="6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.650592 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\": container with ID starting with 6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750 not found: ID does not exist" containerID="6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650646 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750"} err="failed to get container status \"6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\": rpc error: code = NotFound desc = could not find container \"6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750\": container with ID starting with 6769a5284fe4dae7df7948532cabc41d1f4a27a7e61ab818d61b26eb8165a750 not found: ID does not exist" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650662 4642 scope.go:117] "RemoveContainer" containerID="d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad" Jan 28 06:51:56 crc kubenswrapper[4642]: E0128 06:51:56.650869 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\": container with ID starting with d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad not found: ID does not exist" containerID="d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad" Jan 28 06:51:56 crc kubenswrapper[4642]: I0128 06:51:56.650888 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad"} err="failed to get container status \"d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\": rpc error: code = NotFound desc = could not find container \"d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad\": container with ID starting with d544adc85b9e84ad7685c6ce4e1ae6e7679fb24a21b7d44eb402b613ef6be7ad not found: ID does not exist" Jan 28 06:51:57 crc kubenswrapper[4642]: I0128 06:51:57.100329 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:57 crc kubenswrapper[4642]: I0128 06:51:57.100700 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:57 crc kubenswrapper[4642]: I0128 06:51:57.101759 4642 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:51:57 crc kubenswrapper[4642]: I0128 06:51:57.103735 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.506887 4642 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.508057 4642 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.508489 4642 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.508726 4642 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.508922 4642 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:01 crc kubenswrapper[4642]: I0128 06:52:01.508949 4642 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.509155 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="200ms" Jan 28 06:52:01 crc kubenswrapper[4642]: E0128 06:52:01.710352 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="400ms" Jan 28 06:52:02 crc kubenswrapper[4642]: E0128 06:52:02.111298 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="800ms" Jan 28 06:52:02 crc kubenswrapper[4642]: E0128 06:52:02.911866 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="1.6s" Jan 28 06:52:04 crc kubenswrapper[4642]: E0128 06:52:04.512790 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="3.2s" Jan 28 06:52:04 crc kubenswrapper[4642]: E0128 06:52:04.692983 4642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.248:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ed2775bccceff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,LastTimestamp:2026-01-28 06:51:53.686273791 +0000 UTC m=+236.918362601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.608961 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 
06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.609012 4642 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7" exitCode=1 Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.609041 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7"} Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.609576 4642 scope.go:117] "RemoveContainer" containerID="0f31fe6b99be2f9edd215e88aadd39790ce7f144061336772561af3eab969fb7" Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.609730 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.610087 4642 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:05 crc kubenswrapper[4642]: I0128 06:52:05.610449 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.290522 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.619787 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.619844 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6fb0af6baa0a1db19e803a7759cb365bce8fa520fb213d48456422d84ff6a95a"} Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.620460 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.620663 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: 
connection refused" Jan 28 06:52:06 crc kubenswrapper[4642]: I0128 06:52:06.620884 4642 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:07 crc kubenswrapper[4642]: I0128 06:52:07.099833 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:07 crc kubenswrapper[4642]: I0128 06:52:07.100460 4642 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:07 crc kubenswrapper[4642]: I0128 06:52:07.100802 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" Jan 28 06:52:07 crc kubenswrapper[4642]: E0128 06:52:07.713982 4642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.248:6443: connect: connection refused" interval="6.4s" Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.098161 4642 util.go:30] "No sandbox for pod can be found. 
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.098913 4642 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.099232 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.099533 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.110598 4642 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.110626 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:08 crc kubenswrapper[4642]: E0128 06:52:08.111023 4642 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.111848 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:08 crc kubenswrapper[4642]: W0128 06:52:08.125464 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1e0914c645d91d16e8482c1883830a55d43ad48a75ac822704458255635742c4 WatchSource:0}: Error finding container 1e0914c645d91d16e8482c1883830a55d43ad48a75ac822704458255635742c4: Status 404 returned error can't find the container with id 1e0914c645d91d16e8482c1883830a55d43ad48a75ac822704458255635742c4
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.629939 4642 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="24fae5e919bf9cc96fc088c3331f2153877b953a304922f845045d3c48153cd2" exitCode=0
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.630025 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"24fae5e919bf9cc96fc088c3331f2153877b953a304922f845045d3c48153cd2"}
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.630317 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e0914c645d91d16e8482c1883830a55d43ad48a75ac822704458255635742c4"}
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.630594 4642 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.630615 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:08 crc kubenswrapper[4642]: E0128 06:52:08.631010 4642 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.248:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.631028 4642 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.631326 4642 status_manager.go:851] "Failed to get status for pod" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:08 crc kubenswrapper[4642]: I0128 06:52:08.631556 4642 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.248:6443: connect: connection refused"
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.637595 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bce724a973d675bcc50b3098e57d34b080c96cc731259f076fe5ff722e6fa901"}
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.637925 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ea9c817b8e24e4c30b99c105896c14ef9214d0af4840d6d06762f6b0bc8d213b"}
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.637937 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"504ea581e3ce49e7a8a732b575fb2a02a990eab4d0c9b3ded7856197eabf74ff"}
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.637946 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ef3502bdab866cc7abeeaf3ed7a3583019f52bd223e397d0ac03d0830937607"}
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.637954 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"283deea6e98c4d63b4519d762cc87c1051757dd4ca4aeadac7973d34e5cb80c7"}
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.638153 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.638284 4642 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.638302 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.921194 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:52:09 crc kubenswrapper[4642]: I0128 06:52:09.932443 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:52:10 crc kubenswrapper[4642]: I0128 06:52:10.647761 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:52:13 crc kubenswrapper[4642]: I0128 06:52:13.112702 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:13 crc kubenswrapper[4642]: I0128 06:52:13.112737 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:13 crc kubenswrapper[4642]: I0128 06:52:13.116469 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:14 crc kubenswrapper[4642]: I0128 06:52:14.839275 4642 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:15 crc kubenswrapper[4642]: I0128 06:52:15.670293 4642 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:15 crc kubenswrapper[4642]: I0128 06:52:15.670558 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:15 crc kubenswrapper[4642]: I0128 06:52:15.673901 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 06:52:16 crc kubenswrapper[4642]: I0128 06:52:16.294863 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 06:52:16 crc kubenswrapper[4642]: I0128 06:52:16.674591 4642 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:16 crc kubenswrapper[4642]: I0128 06:52:16.674637 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="286f1a08-e4be-475e-a3ff-c37c99c41ea6"
Jan 28 06:52:17 crc kubenswrapper[4642]: I0128 06:52:17.110141 4642 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="879c0924-de5c-46ff-8469-e39aef8a54e4"
Jan 28 06:52:24 crc kubenswrapper[4642]: I0128 06:52:24.481573 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 06:52:25 crc kubenswrapper[4642]: I0128 06:52:25.156215 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 06:52:25 crc kubenswrapper[4642]: I0128 06:52:25.213527 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 06:52:25 crc kubenswrapper[4642]: I0128 06:52:25.563822 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 06:52:25 crc kubenswrapper[4642]: I0128 06:52:25.956530 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 06:52:26 crc kubenswrapper[4642]: I0128 06:52:26.440081 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 06:52:26 crc kubenswrapper[4642]: I0128 06:52:26.664847 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 06:52:26 crc kubenswrapper[4642]: I0128 06:52:26.682815 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 06:52:26 crc kubenswrapper[4642]: I0128 06:52:26.797311 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.176340 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.261403 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.266057 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.298332 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.476076 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.553914 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.661598 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.936349 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.952783 4642 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 06:52:27 crc kubenswrapper[4642]: I0128 06:52:27.990765 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.018838 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.036597 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.104050 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.278426 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.484161 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.574821 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.655249 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.670663 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.784234 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.839155 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 06:52:28 crc kubenswrapper[4642]: I0128 06:52:28.902881 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.000032 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.026766 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.157450 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.172607 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.290390 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.311542 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.410315 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.506222 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.517512 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.519886 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.589174 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.717248 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.767410 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.902396 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.918684 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.938075 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 06:52:29 crc kubenswrapper[4642]: I0128 06:52:29.981155 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.147954 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.351019 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.377555 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.420634 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.425492 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.439554 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.466882 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.515115 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.531328 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.568630 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.654154 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.666130 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.685168 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.774386 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.780481 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.805508 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.851737 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 06:52:30 crc kubenswrapper[4642]: I0128 06:52:30.970794 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.154354 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.167298 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.260277 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.300377 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.315787 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.319808 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.322513 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.424439 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.473781 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.567146 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.677123 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.685418 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.699135 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.713526 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.776083 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.902974 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 06:52:31 crc kubenswrapper[4642]: I0128 06:52:31.915696 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.025987 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.180906 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.185029 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.202936 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.204062 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.342133 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.362169 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.519341 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.527068 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.580220 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.600307 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.608724 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.634882 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.689131 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.721387 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.735501 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.757806 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.829807 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 06:52:32 crc kubenswrapper[4642]: I0128 06:52:32.923319 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.111836 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.309008 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.343708 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.466095 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.474676 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.475102 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.518546 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.601939 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.665599 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.721630 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.748626 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.876097 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.932933 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.961692 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.966433 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 06:52:33 crc kubenswrapper[4642]: I0128 06:52:33.981218 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.007681 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.023034 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.141130 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.164235 4642 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.188104 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.260827 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.333637 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.344962 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.402108 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.424024 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.439803 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.542532 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.549855 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.584926 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.586959 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.655250 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.737512 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.822059 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.854467 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.887818 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.901802 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.925588 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.934183 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 06:52:34 crc kubenswrapper[4642]: I0128 06:52:34.964757 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.029001 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.074722 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.119235 4642 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.133124 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.171529 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.284415 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.340130 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.346075 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.361723 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.438399 4642 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.441734 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.441716558 podStartE2EDuration="42.441716558s" podCreationTimestamp="2026-01-28 06:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:52:14.850482796 +0000 UTC m=+258.082571605" watchObservedRunningTime="2026-01-28 06:52:35.441716558 +0000 UTC m=+278.673805367" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.442085 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.442128 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.445669 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.454267 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.457667 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.457650041 podStartE2EDuration="21.457650041s" podCreationTimestamp="2026-01-28 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:52:35.453819399 +0000 UTC m=+278.685908208" watchObservedRunningTime="2026-01-28 06:52:35.457650041 +0000 UTC m=+278.689738850" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.465934 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.537855 4642 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.582211 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.692232 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.756366 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.861039 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.912641 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.921774 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.952088 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.952179 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 06:52:35 crc kubenswrapper[4642]: I0128 06:52:35.954914 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.168551 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.179303 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.203280 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.251086 4642 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.251321 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f" gracePeriod=5 Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.254913 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.267770 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.310646 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.327339 4642 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.412126 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.444076 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.450092 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.477484 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.477697 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.498565 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.530708 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.547680 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.552452 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.685125 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.711462 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.729929 4642 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.816363 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.921724 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.972178 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 06:52:36 crc kubenswrapper[4642]: I0128 06:52:36.998558 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.085927 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.152654 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.165959 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.242468 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.286048 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.293397 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.316767 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.355921 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.457062 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.530782 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"]
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.531353 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsvlt" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="registry-server" containerID="cri-o://c4c6b60c32d2c8da76a328af6e7bdacac70b4f8ead6192dbf9e2c55ad4718e5d" gracePeriod=30
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.532855 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.534690 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgssm"]
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.534933 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgssm" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="registry-server" containerID="cri-o://9a4f2a4a4a51d678b3b607501d73aeeb68695f503d4ff64e0bba6cd3e096ebf6" gracePeriod=30
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.544960 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"]
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.545129 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" containerID="cri-o://a417b9b24d811f44d5802c81373a18a801aa7cfdaa6850f2e277887bbdac72f0" gracePeriod=30
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.559487 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"]
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.559710 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sv75h" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="registry-server" containerID="cri-o://ce5ed18a12f3a09fb50ca4995f92c25febe840f58670239b767e7e60fbc44c53" gracePeriod=30
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.562950 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"]
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.563133 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ps9zm" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="registry-server" containerID="cri-o://10b1cb7a9a54d17a2a80282801b2159ac4fa81cbab270e39da3158d43568609d" gracePeriod=30
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.575965 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.579337 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfmj2"]
Jan 28 06:52:37 crc kubenswrapper[4642]: E0128 06:52:37.579571 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" containerName="installer"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.579589 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" containerName="installer"
Jan 28 06:52:37 crc kubenswrapper[4642]: E0128 06:52:37.579599 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.579606 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.579705 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74d06e3-e9f9-4b9e-a6d4-4d439a470df0" containerName="installer"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.579721 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.580167 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2"
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.589237 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfmj2"] Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.594530 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.594658 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtrg\" (UniqueName: \"kubernetes.io/projected/09fc0334-7203-49cf-958d-0c34a6dc1bdc-kube-api-access-7rtrg\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.594737 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.598777 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.652726 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.695629 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtrg\" (UniqueName: \"kubernetes.io/projected/09fc0334-7203-49cf-958d-0c34a6dc1bdc-kube-api-access-7rtrg\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.695690 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.696324 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.698063 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.706227 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.707676 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09fc0334-7203-49cf-958d-0c34a6dc1bdc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.713530 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtrg\" (UniqueName: \"kubernetes.io/projected/09fc0334-7203-49cf-958d-0c34a6dc1bdc-kube-api-access-7rtrg\") pod \"marketplace-operator-79b997595-mfmj2\" (UID: \"09fc0334-7203-49cf-958d-0c34a6dc1bdc\") " pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.798256 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.803311 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerDied","Data":"c4c6b60c32d2c8da76a328af6e7bdacac70b4f8ead6192dbf9e2c55ad4718e5d"} Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.803260 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerID="c4c6b60c32d2c8da76a328af6e7bdacac70b4f8ead6192dbf9e2c55ad4718e5d" exitCode=0 Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.805411 4642 generic.go:334] "Generic (PLEG): container finished" podID="8f1e7744-f44d-4430-915b-59821d507da1" containerID="10b1cb7a9a54d17a2a80282801b2159ac4fa81cbab270e39da3158d43568609d" exitCode=0 Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.805479 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerDied","Data":"10b1cb7a9a54d17a2a80282801b2159ac4fa81cbab270e39da3158d43568609d"} Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.807104 4642 generic.go:334] "Generic (PLEG): container finished" podID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerID="ce5ed18a12f3a09fb50ca4995f92c25febe840f58670239b767e7e60fbc44c53" exitCode=0 Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.807169 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerDied","Data":"ce5ed18a12f3a09fb50ca4995f92c25febe840f58670239b767e7e60fbc44c53"} Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.813313 4642 generic.go:334] "Generic (PLEG): container finished" podID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerID="9a4f2a4a4a51d678b3b607501d73aeeb68695f503d4ff64e0bba6cd3e096ebf6" exitCode=0 Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.813375 4642 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerDied","Data":"9a4f2a4a4a51d678b3b607501d73aeeb68695f503d4ff64e0bba6cd3e096ebf6"} Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.814471 4642 generic.go:334] "Generic (PLEG): container finished" podID="ab246574-4025-47da-a132-1d1e72b35a00" containerID="a417b9b24d811f44d5802c81373a18a801aa7cfdaa6850f2e277887bbdac72f0" exitCode=0 Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.814531 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" event={"ID":"ab246574-4025-47da-a132-1d1e72b35a00","Type":"ContainerDied","Data":"a417b9b24d811f44d5802c81373a18a801aa7cfdaa6850f2e277887bbdac72f0"} Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.847589 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.880374 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.886438 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.909269 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.945161 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:52:37 crc kubenswrapper[4642]: I0128 06:52:37.950060 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.082382 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.090910 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.109622 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.112089 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113324 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities\") pod \"a4be5139-f33d-4cb9-829c-cfe1116c1305\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113380 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content\") pod \"8f1e7744-f44d-4430-915b-59821d507da1\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113418 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k5nw\" (UniqueName: \"kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw\") pod \"a4be5139-f33d-4cb9-829c-cfe1116c1305\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113462 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content\") pod \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113483 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7d6q\" (UniqueName: \"kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q\") pod \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113520 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjvrv\" (UniqueName: \"kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv\") pod \"8f1e7744-f44d-4430-915b-59821d507da1\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113543 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities\") pod \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\" (UID: \"46ff4beb-e52c-4d77-96a2-0dcde4c8c516\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113566 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities\") pod \"8f1e7744-f44d-4430-915b-59821d507da1\" (UID: \"8f1e7744-f44d-4430-915b-59821d507da1\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.113589 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content\") pod \"a4be5139-f33d-4cb9-829c-cfe1116c1305\" (UID: \"a4be5139-f33d-4cb9-829c-cfe1116c1305\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.121866 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities" (OuterVolumeSpecName: "utilities") pod 
"46ff4beb-e52c-4d77-96a2-0dcde4c8c516" (UID: "46ff4beb-e52c-4d77-96a2-0dcde4c8c516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.121901 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities" (OuterVolumeSpecName: "utilities") pod "8f1e7744-f44d-4430-915b-59821d507da1" (UID: "8f1e7744-f44d-4430-915b-59821d507da1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.122121 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities" (OuterVolumeSpecName: "utilities") pod "a4be5139-f33d-4cb9-829c-cfe1116c1305" (UID: "a4be5139-f33d-4cb9-829c-cfe1116c1305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.126727 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q" (OuterVolumeSpecName: "kube-api-access-k7d6q") pod "46ff4beb-e52c-4d77-96a2-0dcde4c8c516" (UID: "46ff4beb-e52c-4d77-96a2-0dcde4c8c516"). InnerVolumeSpecName "kube-api-access-k7d6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.131050 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv" (OuterVolumeSpecName: "kube-api-access-rjvrv") pod "8f1e7744-f44d-4430-915b-59821d507da1" (UID: "8f1e7744-f44d-4430-915b-59821d507da1"). InnerVolumeSpecName "kube-api-access-rjvrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.131704 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw" (OuterVolumeSpecName: "kube-api-access-5k5nw") pod "a4be5139-f33d-4cb9-829c-cfe1116c1305" (UID: "a4be5139-f33d-4cb9-829c-cfe1116c1305"). InnerVolumeSpecName "kube-api-access-5k5nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.167884 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4be5139-f33d-4cb9-829c-cfe1116c1305" (UID: "a4be5139-f33d-4cb9-829c-cfe1116c1305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.176941 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46ff4beb-e52c-4d77-96a2-0dcde4c8c516" (UID: "46ff4beb-e52c-4d77-96a2-0dcde4c8c516"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215573 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics\") pod \"ab246574-4025-47da-a132-1d1e72b35a00\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215650 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities\") pod \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215676 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47t7v\" (UniqueName: \"kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v\") pod \"ab246574-4025-47da-a132-1d1e72b35a00\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215701 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content\") pod \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215719 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca\") pod \"ab246574-4025-47da-a132-1d1e72b35a00\" (UID: \"ab246574-4025-47da-a132-1d1e72b35a00\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.215764 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5td7\" (UniqueName: \"kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7\") pod \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\" (UID: \"bd03d734-1b6b-4f56-bb31-19bcb87a0250\") " Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216009 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k5nw\" (UniqueName: \"kubernetes.io/projected/a4be5139-f33d-4cb9-829c-cfe1116c1305-kube-api-access-5k5nw\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216027 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216036 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7d6q\" (UniqueName: \"kubernetes.io/projected/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-kube-api-access-k7d6q\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216044 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjvrv\" (UniqueName: \"kubernetes.io/projected/8f1e7744-f44d-4430-915b-59821d507da1-kube-api-access-rjvrv\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216053 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/46ff4beb-e52c-4d77-96a2-0dcde4c8c516-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216062 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216072 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216081 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be5139-f33d-4cb9-829c-cfe1116c1305-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216417 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities" (OuterVolumeSpecName: "utilities") pod "bd03d734-1b6b-4f56-bb31-19bcb87a0250" (UID: "bd03d734-1b6b-4f56-bb31-19bcb87a0250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.216552 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ab246574-4025-47da-a132-1d1e72b35a00" (UID: "ab246574-4025-47da-a132-1d1e72b35a00"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.217144 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f1e7744-f44d-4430-915b-59821d507da1" (UID: "8f1e7744-f44d-4430-915b-59821d507da1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.218686 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ab246574-4025-47da-a132-1d1e72b35a00" (UID: "ab246574-4025-47da-a132-1d1e72b35a00"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.218967 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7" (OuterVolumeSpecName: "kube-api-access-f5td7") pod "bd03d734-1b6b-4f56-bb31-19bcb87a0250" (UID: "bd03d734-1b6b-4f56-bb31-19bcb87a0250"). InnerVolumeSpecName "kube-api-access-f5td7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.219721 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v" (OuterVolumeSpecName: "kube-api-access-47t7v") pod "ab246574-4025-47da-a132-1d1e72b35a00" (UID: "ab246574-4025-47da-a132-1d1e72b35a00"). 
InnerVolumeSpecName "kube-api-access-47t7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.232626 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.235155 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd03d734-1b6b-4f56-bb31-19bcb87a0250" (UID: "bd03d734-1b6b-4f56-bb31-19bcb87a0250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318098 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318126 4642 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318135 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1e7744-f44d-4430-915b-59821d507da1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318144 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5td7\" (UniqueName: \"kubernetes.io/projected/bd03d734-1b6b-4f56-bb31-19bcb87a0250-kube-api-access-f5td7\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318152 4642 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ab246574-4025-47da-a132-1d1e72b35a00-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318160 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd03d734-1b6b-4f56-bb31-19bcb87a0250-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.318167 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47t7v\" (UniqueName: \"kubernetes.io/projected/ab246574-4025-47da-a132-1d1e72b35a00-kube-api-access-47t7v\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.335566 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mfmj2"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.365036 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.392420 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.425861 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.440940 4642 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.481955 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.505154 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.539816 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.691337 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.807202 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.819770 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" event={"ID":"09fc0334-7203-49cf-958d-0c34a6dc1bdc","Type":"ContainerStarted","Data":"5bc734cd0ee715dc4198f4f01265ca96e5d48914e210f83e0914511f89f3ca9a"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.819808 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" event={"ID":"09fc0334-7203-49cf-958d-0c34a6dc1bdc","Type":"ContainerStarted","Data":"4a9d54cde967a0c30c2456a1d42f311d53a738a898b0adb3ff9e441c78893b4c"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.820700 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.823623 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sv75h" event={"ID":"bd03d734-1b6b-4f56-bb31-19bcb87a0250","Type":"ContainerDied","Data":"ae653b857e42a05b66d316809aa5f800a426c71fd39b64772a36af740e3df473"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.823656 4642 scope.go:117] "RemoveContainer" containerID="ce5ed18a12f3a09fb50ca4995f92c25febe840f58670239b767e7e60fbc44c53" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.823678 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sv75h" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.830503 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.831291 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.832774 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgssm" event={"ID":"46ff4beb-e52c-4d77-96a2-0dcde4c8c516","Type":"ContainerDied","Data":"60d7f53884b14ea88aa498ba7b371ed3f3458e675dcda661eb40969f5c9eda64"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.832868 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgssm" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.834071 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" event={"ID":"ab246574-4025-47da-a132-1d1e72b35a00","Type":"ContainerDied","Data":"53f77fca7253f5374987afb92b9534da91f86cedbd690edc761a95d211efc77b"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.834141 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-965rk" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.836707 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mfmj2" podStartSLOduration=1.8366938130000001 podStartE2EDuration="1.836693813s" podCreationTimestamp="2026-01-28 06:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:52:38.830875988 +0000 UTC m=+282.062964797" watchObservedRunningTime="2026-01-28 06:52:38.836693813 +0000 UTC m=+282.068782622" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.840485 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsvlt" event={"ID":"a4be5139-f33d-4cb9-829c-cfe1116c1305","Type":"ContainerDied","Data":"5cfb88e7738a7228f57500aad6298643ba0107a3ea1666b810b2a530af005711"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.840556 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsvlt" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.842566 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ps9zm" event={"ID":"8f1e7744-f44d-4430-915b-59821d507da1","Type":"ContainerDied","Data":"4fdfeeb6a5f3b120698f1ff3bfc8ed152db78426a1bf0d277cc1f4ccfb16ac14"} Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.842630 4642 util.go:48] "No ready sandbox for pod can be found. 
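The "Observed pod startup duration" entry above derives podStartE2EDuration=1.836693813s as the gap between podCreationTimestamp (06:52:37) and watchObservedRunningTime (06:52:38.836693813); the zero firstStartedPulling/lastFinishedPulling values mean no image pull was recorded for this start. The same arithmetic, replayed on the values from that entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.String() layout, as used in the log entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-28 06:52:37 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-28 06:52:38.836693813 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
	fmt.Println(running.Sub(created)) // 1.836693813s
}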
Need to start a new one" pod="openshift-marketplace/redhat-operators-ps9zm" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.847169 4642 scope.go:117] "RemoveContainer" containerID="1372f6e5fc16b214b7227c199fa4ca959da0a3bc4d41cd2ccf5da306356c3add" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.859200 4642 scope.go:117] "RemoveContainer" containerID="63a90fe72aae5ef74166010e6d29732114bac7891331dfae1b42946811dcd881" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.867350 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.877511 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgssm"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.881014 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgssm"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.883567 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.886101 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sv75h"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.889813 4642 scope.go:117] "RemoveContainer" containerID="9a4f2a4a4a51d678b3b607501d73aeeb68695f503d4ff64e0bba6cd3e096ebf6" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.905369 4642 scope.go:117] "RemoveContainer" containerID="4e61c37e6054682ba02c423cf3a754a2084ab573094068b472db7b8939be74cc" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.910472 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.912204 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsvlt"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.920627 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.927539 4642 scope.go:117] "RemoveContainer" containerID="c70efe38afa75888ebb53c0bbed0c932d5495056f6a07ba922210376c5988b2d" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.928893 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.931483 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ps9zm"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.935309 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.937312 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-965rk"] Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.941173 4642 scope.go:117] "RemoveContainer" containerID="a417b9b24d811f44d5802c81373a18a801aa7cfdaa6850f2e277887bbdac72f0" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.941567 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 06:52:38 crc 
kubenswrapper[4642]: I0128 06:52:38.955379 4642 scope.go:117] "RemoveContainer" containerID="c4c6b60c32d2c8da76a328af6e7bdacac70b4f8ead6192dbf9e2c55ad4718e5d" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.965639 4642 scope.go:117] "RemoveContainer" containerID="e8932f18cf2283617e4e09e5994b3d217308d40b6950a94b2b1367fc704fde5f" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.975814 4642 scope.go:117] "RemoveContainer" containerID="29bced9bb267e10a1e7d93cad436adf7afecc2dd1e8f2a23e112d32129a42314" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.990041 4642 scope.go:117] "RemoveContainer" containerID="10b1cb7a9a54d17a2a80282801b2159ac4fa81cbab270e39da3158d43568609d" Jan 28 06:52:38 crc kubenswrapper[4642]: I0128 06:52:38.998822 4642 scope.go:117] "RemoveContainer" containerID="df9fe6ecacf0625c219021d2a1208998007b6196c960d3b1a6159e1924c49204" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.013115 4642 scope.go:117] "RemoveContainer" containerID="5cd4336221342b67043af1e26cfbc7e85f41145bb6fd5c2bde811f721c94e123" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.023349 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.035628 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.036430 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.036940 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.103955 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" path="/var/lib/kubelet/pods/46ff4beb-e52c-4d77-96a2-0dcde4c8c516/volumes" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.104573 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1e7744-f44d-4430-915b-59821d507da1" path="/var/lib/kubelet/pods/8f1e7744-f44d-4430-915b-59821d507da1/volumes" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.105110 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" path="/var/lib/kubelet/pods/a4be5139-f33d-4cb9-829c-cfe1116c1305/volumes" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.106093 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab246574-4025-47da-a132-1d1e72b35a00" path="/var/lib/kubelet/pods/ab246574-4025-47da-a132-1d1e72b35a00/volumes" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.106513 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" path="/var/lib/kubelet/pods/bd03d734-1b6b-4f56-bb31-19bcb87a0250/volumes" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.259570 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.260859 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.343754 4642 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.356163 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.371421 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.374359 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.465554 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.580540 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.720142 4642 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.726759 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.730724 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.867563 4642 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.869176 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 06:52:39 crc kubenswrapper[4642]: I0128 06:52:39.931303 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.201327 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.267452 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.283964 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.355525 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.426286 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.448201 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.681447 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 06:52:40 crc kubenswrapper[4642]: I0128 06:52:40.844071 4642 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.026753 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.499473 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.807833 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.808174 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859331 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859372 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859389 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859407 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859451 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859659 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859707 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859725 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.859758 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.860959 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.860999 4642 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f" exitCode=137 Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.861042 4642 scope.go:117] "RemoveContainer" containerID="55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.861153 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.867494 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.890569 4642 scope.go:117] "RemoveContainer" containerID="55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f" Jan 28 06:52:41 crc kubenswrapper[4642]: E0128 06:52:41.890949 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f\": container with ID starting with 55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f not found: ID does not exist" containerID="55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.890981 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f"} err="failed to get container status \"55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f\": rpc error: code = NotFound desc = could not find container \"55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f\": container with ID starting with 55e7e1b8d62eaa26d493e3e461f1a76d8ecd3878ab0965d1fb59c0f4ca0a605f not found: ID does not exist" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.960634 4642 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.960668 4642 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.960677 4642 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.960685 4642 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:41 crc kubenswrapper[4642]: I0128 06:52:41.960695 4642 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.104249 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.104473 4642 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.113163 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.113209 4642 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a8d23ec9-709f-46fc-be13-38f394678beb" Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.115984 4642 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.116008 4642 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a8d23ec9-709f-46fc-be13-38f394678beb" Jan 28 06:52:43 crc kubenswrapper[4642]: I0128 06:52:43.139856 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 06:52:56 crc kubenswrapper[4642]: I0128 06:52:56.984586 4642 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 06:53:07 crc kubenswrapper[4642]: I0128 06:53:07.962315 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 06:53:09 crc kubenswrapper[4642]: I0128 06:53:09.299155 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 06:53:15 crc kubenswrapper[4642]: I0128 06:53:15.876878 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.777928 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65j4l"] Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778662 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778675 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778685 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778691 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778699 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778705 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778713 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778727 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778733 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778738 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778746 4642 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778752 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778762 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778767 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778773 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778779 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778786 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778791 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778799 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778804 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778814 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778819 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="extract-utilities" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778828 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778833 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="extract-content" Jan 28 06:53:25 crc kubenswrapper[4642]: E0128 06:53:25.778840 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778845 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778923 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1e7744-f44d-4430-915b-59821d507da1" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778931 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4be5139-f33d-4cb9-829c-cfe1116c1305" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778938 4642 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46ff4beb-e52c-4d77-96a2-0dcde4c8c516" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778944 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd03d734-1b6b-4f56-bb31-19bcb87a0250" containerName="registry-server" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.778954 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab246574-4025-47da-a132-1d1e72b35a00" containerName="marketplace-operator" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.779566 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.781133 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.787446 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65j4l"] Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.806598 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkt6\" (UniqueName: \"kubernetes.io/projected/04f55e4c-8e13-4147-87e5-c69535042a39-kube-api-access-ggkt6\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.806659 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-utilities\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.806762 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-catalog-content\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.908236 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggkt6\" (UniqueName: \"kubernetes.io/projected/04f55e4c-8e13-4147-87e5-c69535042a39-kube-api-access-ggkt6\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.908309 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-utilities\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.908370 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-catalog-content\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 
06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.908912 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-catalog-content\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.908944 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f55e4c-8e13-4147-87e5-c69535042a39-utilities\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.925681 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggkt6\" (UniqueName: \"kubernetes.io/projected/04f55e4c-8e13-4147-87e5-c69535042a39-kube-api-access-ggkt6\") pod \"certified-operators-65j4l\" (UID: \"04f55e4c-8e13-4147-87e5-c69535042a39\") " pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.976485 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.977522 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.979914 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 06:53:25 crc kubenswrapper[4642]: I0128 06:53:25.984522 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.009668 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.009739 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.009890 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jv48\" (UniqueName: \"kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.093518 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.110913 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.111003 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jv48\" (UniqueName: \"kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.111051 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.111448 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.111935 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.127489 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jv48\" (UniqueName: \"kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48\") pod \"community-operators-pm857\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.289402 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.435861 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65j4l"] Jan 28 06:53:26 crc kubenswrapper[4642]: I0128 06:53:26.663079 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 06:53:26 crc kubenswrapper[4642]: W0128 06:53:26.667563 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ecdab48_c9af_40b5_9fbb_8c290986cef1.slice/crio-d5253b10013ba51818338d6ef1765b9b3c85c0fc0ec4ee6d7f9a2eb545c0ab5e WatchSource:0}: Error finding container d5253b10013ba51818338d6ef1765b9b3c85c0fc0ec4ee6d7f9a2eb545c0ab5e: Status 404 returned error can't find the container with id d5253b10013ba51818338d6ef1765b9b3c85c0fc0ec4ee6d7f9a2eb545c0ab5e Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.046424 4642 generic.go:334] "Generic (PLEG): container finished" podID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerID="51bb4c902cef575ad21bcf8b15321a4879a77db6bb2300e78e3e6e29f2ca500e" exitCode=0 Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.046503 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerDied","Data":"51bb4c902cef575ad21bcf8b15321a4879a77db6bb2300e78e3e6e29f2ca500e"} Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.046529 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerStarted","Data":"d5253b10013ba51818338d6ef1765b9b3c85c0fc0ec4ee6d7f9a2eb545c0ab5e"} Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.048240 4642 generic.go:334] "Generic (PLEG): container finished" podID="04f55e4c-8e13-4147-87e5-c69535042a39" containerID="1411d6c14824356628af58bbc850947215060777f70007510eaeaa9dc34498eb" exitCode=0 Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.048272 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65j4l" event={"ID":"04f55e4c-8e13-4147-87e5-c69535042a39","Type":"ContainerDied","Data":"1411d6c14824356628af58bbc850947215060777f70007510eaeaa9dc34498eb"} Jan 28 06:53:27 crc kubenswrapper[4642]: I0128 06:53:27.048290 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65j4l" event={"ID":"04f55e4c-8e13-4147-87e5-c69535042a39","Type":"ContainerStarted","Data":"82261f04d625ed24b3592d1a7638b3c4cd86da557b96add150b7904793aeab7d"} Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.054235 4642 generic.go:334] "Generic (PLEG): container finished" podID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerID="e18fcae8e7768fe229387f4b71f874462e90e40891f3d7eec956040cb86609ad" exitCode=0 Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.054333 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerDied","Data":"e18fcae8e7768fe229387f4b71f874462e90e40891f3d7eec956040cb86609ad"} Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.057559 4642 generic.go:334] "Generic (PLEG): container finished" podID="04f55e4c-8e13-4147-87e5-c69535042a39" 
containerID="b1b589db47e880c2d32c8e009f48a4dddd31bd003cfcc266e5b2161835ce41cf" exitCode=0 Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.057622 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65j4l" event={"ID":"04f55e4c-8e13-4147-87e5-c69535042a39","Type":"ContainerDied","Data":"b1b589db47e880c2d32c8e009f48a4dddd31bd003cfcc266e5b2161835ce41cf"} Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.176042 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pnx2g"] Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.176937 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.178520 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.183539 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnx2g"] Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.230572 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fk4j\" (UniqueName: \"kubernetes.io/projected/01782f50-40a0-4a5d-ba1d-0fd6846cb642-kube-api-access-6fk4j\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.230702 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-utilities\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.230725 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-catalog-content\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.331666 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fk4j\" (UniqueName: \"kubernetes.io/projected/01782f50-40a0-4a5d-ba1d-0fd6846cb642-kube-api-access-6fk4j\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.331781 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-utilities\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.331810 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-catalog-content\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " 
pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.332341 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-catalog-content\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.332428 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01782f50-40a0-4a5d-ba1d-0fd6846cb642-utilities\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.349717 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fk4j\" (UniqueName: \"kubernetes.io/projected/01782f50-40a0-4a5d-ba1d-0fd6846cb642-kube-api-access-6fk4j\") pod \"redhat-marketplace-pnx2g\" (UID: \"01782f50-40a0-4a5d-ba1d-0fd6846cb642\") " pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.379255 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gz7t"] Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.380113 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.382261 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.390032 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gz7t"] Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.432721 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-catalog-content\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.432871 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqp8\" (UniqueName: \"kubernetes.io/projected/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-kube-api-access-6xqp8\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.432908 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-utilities\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.489529 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.533359 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-catalog-content\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.533444 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqp8\" (UniqueName: \"kubernetes.io/projected/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-kube-api-access-6xqp8\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.533466 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-utilities\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.533932 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-utilities\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.533931 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-catalog-content\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.554362 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqp8\" (UniqueName: \"kubernetes.io/projected/f07b2642-09a0-4490-bbe8-3e3a48e2a81a-kube-api-access-6xqp8\") pod \"redhat-operators-6gz7t\" (UID: \"f07b2642-09a0-4490-bbe8-3e3a48e2a81a\") " pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.692837 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.861692 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnx2g"] Jan 28 06:53:28 crc kubenswrapper[4642]: W0128 06:53:28.869547 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01782f50_40a0_4a5d_ba1d_0fd6846cb642.slice/crio-3465932f6b28f67516b3b20278534515d00efdf470b72c4fd1102e2769e02121 WatchSource:0}: Error finding container 3465932f6b28f67516b3b20278534515d00efdf470b72c4fd1102e2769e02121: Status 404 returned error can't find the container with id 3465932f6b28f67516b3b20278534515d00efdf470b72c4fd1102e2769e02121 Jan 28 06:53:28 crc kubenswrapper[4642]: I0128 06:53:28.896686 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gz7t"] Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.066308 4642 generic.go:334] "Generic (PLEG): container finished" podID="f07b2642-09a0-4490-bbe8-3e3a48e2a81a" containerID="d0c500ca69c868af01943d1f284514283bb4ae6ea9b01378279ac35a8930e7d2" exitCode=0 Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.066425 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gz7t" event={"ID":"f07b2642-09a0-4490-bbe8-3e3a48e2a81a","Type":"ContainerDied","Data":"d0c500ca69c868af01943d1f284514283bb4ae6ea9b01378279ac35a8930e7d2"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.066497 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gz7t" event={"ID":"f07b2642-09a0-4490-bbe8-3e3a48e2a81a","Type":"ContainerStarted","Data":"12bd809a4d4acbd6aac275839958df69c51e5aab911b49762673c7cfd17d4e5a"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.071507 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65j4l" event={"ID":"04f55e4c-8e13-4147-87e5-c69535042a39","Type":"ContainerStarted","Data":"82f65eb570563aa73264bc2304d105ef1de4fca2a2d258e66c7d9518118d1285"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.073849 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerStarted","Data":"45a63fa4cd5dbd942ee01e7213dc178a94512dbc13a144028444c316cf620c14"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.076079 4642 generic.go:334] "Generic (PLEG): container finished" podID="01782f50-40a0-4a5d-ba1d-0fd6846cb642" containerID="af51f1b57848663f4c4ff2723cca8ab8345b751d5588ebc047ea7b20a2a77225" exitCode=0 Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.076141 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnx2g" event={"ID":"01782f50-40a0-4a5d-ba1d-0fd6846cb642","Type":"ContainerDied","Data":"af51f1b57848663f4c4ff2723cca8ab8345b751d5588ebc047ea7b20a2a77225"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.076233 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnx2g" event={"ID":"01782f50-40a0-4a5d-ba1d-0fd6846cb642","Type":"ContainerStarted","Data":"3465932f6b28f67516b3b20278534515d00efdf470b72c4fd1102e2769e02121"} Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.116233 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-65j4l" podStartSLOduration=2.554421181 podStartE2EDuration="4.116208422s" podCreationTimestamp="2026-01-28 06:53:25 +0000 UTC" firstStartedPulling="2026-01-28 06:53:27.049423122 +0000 UTC m=+330.281511931" lastFinishedPulling="2026-01-28 06:53:28.611210364 +0000 UTC m=+331.843299172" observedRunningTime="2026-01-28 06:53:29.114297464 +0000 UTC m=+332.346386273" watchObservedRunningTime="2026-01-28 06:53:29.116208422 +0000 UTC m=+332.348297231" Jan 28 06:53:29 crc kubenswrapper[4642]: I0128 06:53:29.128098 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pm857" podStartSLOduration=2.565857453 podStartE2EDuration="4.128089532s" podCreationTimestamp="2026-01-28 06:53:25 +0000 UTC" firstStartedPulling="2026-01-28 06:53:27.048091855 +0000 UTC m=+330.280180664" lastFinishedPulling="2026-01-28 06:53:28.610323934 +0000 UTC m=+331.842412743" observedRunningTime="2026-01-28 06:53:29.126753647 +0000 UTC m=+332.358842466" watchObservedRunningTime="2026-01-28 06:53:29.128089532 +0000 UTC m=+332.360178341" Jan 28 06:53:30 crc kubenswrapper[4642]: I0128 06:53:30.082138 4642 generic.go:334] "Generic (PLEG): container finished" podID="01782f50-40a0-4a5d-ba1d-0fd6846cb642" containerID="6b01764ed95eef2053bacab29015846019d2b6387e9583512424a8ac9220d9b8" exitCode=0 Jan 28 06:53:30 crc kubenswrapper[4642]: I0128 06:53:30.082197 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnx2g" event={"ID":"01782f50-40a0-4a5d-ba1d-0fd6846cb642","Type":"ContainerDied","Data":"6b01764ed95eef2053bacab29015846019d2b6387e9583512424a8ac9220d9b8"} Jan 28 06:53:31 crc kubenswrapper[4642]: I0128 06:53:31.091137 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnx2g" event={"ID":"01782f50-40a0-4a5d-ba1d-0fd6846cb642","Type":"ContainerStarted","Data":"6d851455ae01c59b8adcac27293b7e2fc52c881e295378cd1845c65d868a511c"} Jan 28 06:53:31 crc kubenswrapper[4642]: I0128 06:53:31.094178 4642 generic.go:334] "Generic (PLEG): container finished" podID="f07b2642-09a0-4490-bbe8-3e3a48e2a81a" containerID="941dba7b1b41287326927d14140d43ee5a582fa448cbb9ac6e8f3feff5198e4b" exitCode=0 Jan 28 06:53:31 crc kubenswrapper[4642]: I0128 06:53:31.094262 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gz7t" event={"ID":"f07b2642-09a0-4490-bbe8-3e3a48e2a81a","Type":"ContainerDied","Data":"941dba7b1b41287326927d14140d43ee5a582fa448cbb9ac6e8f3feff5198e4b"} Jan 28 06:53:31 crc kubenswrapper[4642]: I0128 06:53:31.110737 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pnx2g" podStartSLOduration=1.611529298 podStartE2EDuration="3.110718906s" podCreationTimestamp="2026-01-28 06:53:28 +0000 UTC" firstStartedPulling="2026-01-28 06:53:29.077427633 +0000 UTC m=+332.309516441" lastFinishedPulling="2026-01-28 06:53:30.57661724 +0000 UTC m=+333.808706049" observedRunningTime="2026-01-28 06:53:31.108600307 +0000 UTC m=+334.340689116" watchObservedRunningTime="2026-01-28 06:53:31.110718906 +0000 UTC m=+334.342807715" Jan 28 06:53:32 crc kubenswrapper[4642]: I0128 06:53:32.101348 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gz7t" event={"ID":"f07b2642-09a0-4490-bbe8-3e3a48e2a81a","Type":"ContainerStarted","Data":"ce993dc57c1d9fc45eb7f4001131ecc457b0a3543016f6cbc63cc5aaf869a940"} Jan 
28 06:53:32 crc kubenswrapper[4642]: I0128 06:53:32.117710 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gz7t" podStartSLOduration=1.631964166 podStartE2EDuration="4.11769107s" podCreationTimestamp="2026-01-28 06:53:28 +0000 UTC" firstStartedPulling="2026-01-28 06:53:29.068575555 +0000 UTC m=+332.300664364" lastFinishedPulling="2026-01-28 06:53:31.554302459 +0000 UTC m=+334.786391268" observedRunningTime="2026-01-28 06:53:32.115769141 +0000 UTC m=+335.347857950" watchObservedRunningTime="2026-01-28 06:53:32.11769107 +0000 UTC m=+335.349779878" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.094619 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.094675 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.127494 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.168242 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65j4l" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.290022 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.290087 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:36 crc kubenswrapper[4642]: I0128 06:53:36.317106 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:37 crc kubenswrapper[4642]: I0128 06:53:37.150771 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pm857" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.199495 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.200040 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.489698 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.489801 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.520770 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.693577 4642 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.693665 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:38 crc kubenswrapper[4642]: I0128 06:53:38.720851 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:53:39 crc kubenswrapper[4642]: I0128 06:53:39.154891 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pnx2g" Jan 28 06:53:39 crc kubenswrapper[4642]: I0128 06:53:39.154940 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gz7t" Jan 28 06:54:08 crc kubenswrapper[4642]: I0128 06:54:08.199803 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:54:08 crc kubenswrapper[4642]: I0128 06:54:08.200206 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.398269 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9xx5w"] Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.399162 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.405769 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9xx5w"] Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.545646 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.545893 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-bound-sa-token\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.545923 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-certificates\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.545957 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.545980 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.546003 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-trusted-ca\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.546030 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-tls\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.546062 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vcs4\" (UniqueName: 
\"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-kube-api-access-8vcs4\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.560756 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647580 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647624 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-bound-sa-token\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647644 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-certificates\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647667 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647687 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-trusted-ca\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647710 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-tls\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.647734 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vcs4\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-kube-api-access-8vcs4\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.648302 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.649600 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-trusted-ca\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.649959 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-certificates\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.658520 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-registry-tls\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.658605 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.660559 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-bound-sa-token\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.660708 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vcs4\" (UniqueName: \"kubernetes.io/projected/28e6d1b9-720d-439d-ab0e-14e2a288dc9d-kube-api-access-8vcs4\") pod \"image-registry-66df7c8f76-9xx5w\" (UID: \"28e6d1b9-720d-439d-ab0e-14e2a288dc9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:14 crc kubenswrapper[4642]: I0128 06:54:14.712657 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:15 crc kubenswrapper[4642]: I0128 06:54:15.061051 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9xx5w"] Jan 28 06:54:15 crc kubenswrapper[4642]: I0128 06:54:15.278768 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" event={"ID":"28e6d1b9-720d-439d-ab0e-14e2a288dc9d","Type":"ContainerStarted","Data":"0aef8601e01762c4b8084bcd204169ad87750fa32b84ca71d5ab6f53d12d807b"} Jan 28 06:54:15 crc kubenswrapper[4642]: I0128 06:54:15.278808 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" event={"ID":"28e6d1b9-720d-439d-ab0e-14e2a288dc9d","Type":"ContainerStarted","Data":"8b50726c2fcb157b44c0dace47c188120758cf5147bdad3ed7f1574fa146440d"} Jan 28 06:54:15 crc kubenswrapper[4642]: I0128 06:54:15.279557 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:15 crc kubenswrapper[4642]: I0128 06:54:15.292523 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" podStartSLOduration=1.2925115169999999 podStartE2EDuration="1.292511517s" podCreationTimestamp="2026-01-28 06:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:54:15.290110696 +0000 UTC m=+378.522199505" watchObservedRunningTime="2026-01-28 06:54:15.292511517 +0000 UTC m=+378.524600325" Jan 28 06:54:34 crc kubenswrapper[4642]: I0128 06:54:34.717278 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9xx5w" Jan 28 06:54:34 crc kubenswrapper[4642]: I0128 06:54:34.756363 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.199704 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.199968 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.200003 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.200505 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 
06:54:38.200555 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52" gracePeriod=600 Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.374679 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52" exitCode=0 Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.374805 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52"} Jan 28 06:54:38 crc kubenswrapper[4642]: I0128 06:54:38.374838 4642 scope.go:117] "RemoveContainer" containerID="f517e5ecc6bf060062365392506b3ee3a09354a4f5cfaa980e0e3a2507f298c3" Jan 28 06:54:39 crc kubenswrapper[4642]: I0128 06:54:39.380476 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4"} Jan 28 06:54:59 crc kubenswrapper[4642]: I0128 06:54:59.786699 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" podUID="1f06bd76-391b-4d80-ba76-a992ee54241a" containerName="registry" containerID="cri-o://1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6" gracePeriod=30 Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.050714 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242512 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242559 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242595 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242692 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242737 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242784 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242798 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.242830 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4k75\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75\") pod \"1f06bd76-391b-4d80-ba76-a992ee54241a\" (UID: \"1f06bd76-391b-4d80-ba76-a992ee54241a\") " Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.243142 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.243447 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.247045 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.248403 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.248669 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75" (OuterVolumeSpecName: "kube-api-access-z4k75") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "kube-api-access-z4k75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.248796 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.249339 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.256701 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1f06bd76-391b-4d80-ba76-a992ee54241a" (UID: "1f06bd76-391b-4d80-ba76-a992ee54241a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343939 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343961 4642 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343971 4642 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f06bd76-391b-4d80-ba76-a992ee54241a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343980 4642 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343988 4642 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.343996 4642 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f06bd76-391b-4d80-ba76-a992ee54241a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.344004 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4k75\" (UniqueName: \"kubernetes.io/projected/1f06bd76-391b-4d80-ba76-a992ee54241a-kube-api-access-z4k75\") on node \"crc\" DevicePath \"\"" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.456789 4642 generic.go:334] "Generic (PLEG): container finished" podID="1f06bd76-391b-4d80-ba76-a992ee54241a" containerID="1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6" exitCode=0 Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.456828 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.456848 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" event={"ID":"1f06bd76-391b-4d80-ba76-a992ee54241a","Type":"ContainerDied","Data":"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6"} Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.457063 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fg5mw" event={"ID":"1f06bd76-391b-4d80-ba76-a992ee54241a","Type":"ContainerDied","Data":"6b7d1d00b651d16f996e43a6d5ba0a163eda64334f5cd8018ddf39ee6ea513f5"} Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.457082 4642 scope.go:117] "RemoveContainer" containerID="1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.468963 4642 scope.go:117] "RemoveContainer" containerID="1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6" Jan 28 06:55:00 crc kubenswrapper[4642]: E0128 06:55:00.469415 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6\": container with ID starting with 1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6 not found: ID does not exist" containerID="1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.469485 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6"} err="failed to get container status \"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6\": rpc error: code = NotFound desc = could not find container \"1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6\": container with ID starting with 1e1d34cf5c7e64257213213484e2133c1a3d05c8668f138ce538851d32f483d6 not found: ID does not exist" Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.477934 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:55:00 crc kubenswrapper[4642]: I0128 06:55:00.480198 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fg5mw"] Jan 28 06:55:01 crc kubenswrapper[4642]: I0128 06:55:01.102990 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f06bd76-391b-4d80-ba76-a992ee54241a" path="/var/lib/kubelet/pods/1f06bd76-391b-4d80-ba76-a992ee54241a/volumes" Jan 28 06:56:38 crc kubenswrapper[4642]: I0128 06:56:38.199840 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:56:38 crc kubenswrapper[4642]: I0128 06:56:38.200254 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:57:08 crc 
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.915418 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls"]
Jan 28 06:57:25 crc kubenswrapper[4642]: E0128 06:57:25.915917 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f06bd76-391b-4d80-ba76-a992ee54241a" containerName="registry"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.915928 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f06bd76-391b-4d80-ba76-a992ee54241a" containerName="registry"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.916007 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f06bd76-391b-4d80-ba76-a992ee54241a" containerName="registry"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.916392 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.919212 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-l8bb4"]
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.919222 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.919776 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-l8bb4"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.921093 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.921211 4642 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7djxs"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.921231 4642 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7ldxf"
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.927585 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kkdr"]
Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.928113 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr"
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.939948 4642 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jmpzb" Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.956985 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-l8bb4"] Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.964946 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kkdr"] Jan 28 06:57:25 crc kubenswrapper[4642]: I0128 06:57:25.969986 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls"] Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.023913 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcs6\" (UniqueName: \"kubernetes.io/projected/130b06ad-fdbf-4c37-b60e-4a6893a00984-kube-api-access-czcs6\") pod \"cert-manager-cainjector-cf98fcc89-qv4ls\" (UID: \"130b06ad-fdbf-4c37-b60e-4a6893a00984\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.023951 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6n56\" (UniqueName: \"kubernetes.io/projected/e8183488-30bb-4dad-affe-d8ac650f1396-kube-api-access-c6n56\") pod \"cert-manager-858654f9db-l8bb4\" (UID: \"e8183488-30bb-4dad-affe-d8ac650f1396\") " pod="cert-manager/cert-manager-858654f9db-l8bb4" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.124663 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6n56\" (UniqueName: \"kubernetes.io/projected/e8183488-30bb-4dad-affe-d8ac650f1396-kube-api-access-c6n56\") pod \"cert-manager-858654f9db-l8bb4\" (UID: \"e8183488-30bb-4dad-affe-d8ac650f1396\") " pod="cert-manager/cert-manager-858654f9db-l8bb4" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.124908 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85bl\" (UniqueName: \"kubernetes.io/projected/f0d82d56-7c08-4a56-9d8d-14f1b372c248-kube-api-access-g85bl\") pod \"cert-manager-webhook-687f57d79b-2kkdr\" (UID: \"f0d82d56-7c08-4a56-9d8d-14f1b372c248\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.124980 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcs6\" (UniqueName: \"kubernetes.io/projected/130b06ad-fdbf-4c37-b60e-4a6893a00984-kube-api-access-czcs6\") pod \"cert-manager-cainjector-cf98fcc89-qv4ls\" (UID: \"130b06ad-fdbf-4c37-b60e-4a6893a00984\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.140137 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcs6\" (UniqueName: \"kubernetes.io/projected/130b06ad-fdbf-4c37-b60e-4a6893a00984-kube-api-access-czcs6\") pod \"cert-manager-cainjector-cf98fcc89-qv4ls\" (UID: \"130b06ad-fdbf-4c37-b60e-4a6893a00984\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.140503 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6n56\" (UniqueName: 
\"kubernetes.io/projected/e8183488-30bb-4dad-affe-d8ac650f1396-kube-api-access-c6n56\") pod \"cert-manager-858654f9db-l8bb4\" (UID: \"e8183488-30bb-4dad-affe-d8ac650f1396\") " pod="cert-manager/cert-manager-858654f9db-l8bb4" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.225732 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85bl\" (UniqueName: \"kubernetes.io/projected/f0d82d56-7c08-4a56-9d8d-14f1b372c248-kube-api-access-g85bl\") pod \"cert-manager-webhook-687f57d79b-2kkdr\" (UID: \"f0d82d56-7c08-4a56-9d8d-14f1b372c248\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.229453 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.242695 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85bl\" (UniqueName: \"kubernetes.io/projected/f0d82d56-7c08-4a56-9d8d-14f1b372c248-kube-api-access-g85bl\") pod \"cert-manager-webhook-687f57d79b-2kkdr\" (UID: \"f0d82d56-7c08-4a56-9d8d-14f1b372c248\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.246397 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-l8bb4" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.261700 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.602119 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-l8bb4"] Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.607005 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.640144 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2kkdr"] Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.642367 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls"] Jan 28 06:57:26 crc kubenswrapper[4642]: W0128 06:57:26.644497 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130b06ad_fdbf_4c37_b60e_4a6893a00984.slice/crio-e00a68fd317729c67f1cf0df0d4e35d72d5f86445d9c2384a061de57ca51db3e WatchSource:0}: Error finding container e00a68fd317729c67f1cf0df0d4e35d72d5f86445d9c2384a061de57ca51db3e: Status 404 returned error can't find the container with id e00a68fd317729c67f1cf0df0d4e35d72d5f86445d9c2384a061de57ca51db3e Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.979626 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" event={"ID":"f0d82d56-7c08-4a56-9d8d-14f1b372c248","Type":"ContainerStarted","Data":"64819d30f7575607621785b12c55609a46de527cfed97716313ae6ed35363654"} Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.980415 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-l8bb4" event={"ID":"e8183488-30bb-4dad-affe-d8ac650f1396","Type":"ContainerStarted","Data":"55949a8e32d0f16adb88a1232bd021c58a942bfa07be468e8edab7ecbc35e972"} Jan 28 06:57:26 crc 
Jan 28 06:57:26 crc kubenswrapper[4642]: I0128 06:57:26.981138 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" event={"ID":"130b06ad-fdbf-4c37-b60e-4a6893a00984","Type":"ContainerStarted","Data":"e00a68fd317729c67f1cf0df0d4e35d72d5f86445d9c2384a061de57ca51db3e"}
Jan 28 06:57:28 crc kubenswrapper[4642]: I0128 06:57:28.993442 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-l8bb4" event={"ID":"e8183488-30bb-4dad-affe-d8ac650f1396","Type":"ContainerStarted","Data":"1a7441623c25439cf63844863cc58ea0587d721517e0dbf468925908a5f781a9"}
Jan 28 06:57:28 crc kubenswrapper[4642]: I0128 06:57:28.995839 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" event={"ID":"130b06ad-fdbf-4c37-b60e-4a6893a00984","Type":"ContainerStarted","Data":"b38d7dd21946617c980997d4a4079def95f707147164da91c5e555a5ac0c006e"}
Jan 28 06:57:29 crc kubenswrapper[4642]: I0128 06:57:29.013697 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-l8bb4" podStartSLOduration=1.848429113 podStartE2EDuration="4.013684702s" podCreationTimestamp="2026-01-28 06:57:25 +0000 UTC" firstStartedPulling="2026-01-28 06:57:26.606805746 +0000 UTC m=+569.838894556" lastFinishedPulling="2026-01-28 06:57:28.772061336 +0000 UTC m=+572.004150145" observedRunningTime="2026-01-28 06:57:29.012291912 +0000 UTC m=+572.244380721" watchObservedRunningTime="2026-01-28 06:57:29.013684702 +0000 UTC m=+572.245773501"
Jan 28 06:57:29 crc kubenswrapper[4642]: I0128 06:57:29.023880 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qv4ls" podStartSLOduration=1.894340508 podStartE2EDuration="4.023867916s" podCreationTimestamp="2026-01-28 06:57:25 +0000 UTC" firstStartedPulling="2026-01-28 06:57:26.645809569 +0000 UTC m=+569.877898378" lastFinishedPulling="2026-01-28 06:57:28.775336977 +0000 UTC m=+572.007425786" observedRunningTime="2026-01-28 06:57:29.022526643 +0000 UTC m=+572.254615452" watchObservedRunningTime="2026-01-28 06:57:29.023867916 +0000 UTC m=+572.255956726"
Jan 28 06:57:30 crc kubenswrapper[4642]: I0128 06:57:30.001988 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" event={"ID":"f0d82d56-7c08-4a56-9d8d-14f1b372c248","Type":"ContainerStarted","Data":"03c775f1e1adee60c54492a70155bcaaa475b3cb375632c1fa0296ac96fb7673"}
Jan 28 06:57:30 crc kubenswrapper[4642]: I0128 06:57:30.014426 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" podStartSLOduration=2.240598102 podStartE2EDuration="5.014410906s" podCreationTimestamp="2026-01-28 06:57:25 +0000 UTC" firstStartedPulling="2026-01-28 06:57:26.643728254 +0000 UTC m=+569.875817063" lastFinishedPulling="2026-01-28 06:57:29.417541057 +0000 UTC m=+572.649629867" observedRunningTime="2026-01-28 06:57:30.011206048 +0000 UTC m=+573.243294857" watchObservedRunningTime="2026-01-28 06:57:30.014410906 +0000 UTC m=+573.246499715"
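The startup records above carry enough to recompute both figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A check against the cert-manager-858654f9db-l8bb4 record, with the monotonic m=+ offsets dropped; the relationship is inferred from the numbers themselves, and the last digit lands one nanosecond off the logged float:

package main

import (
	"fmt"
	"time"
)

func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2026-01-28 06:57:25 +0000 UTC")
	firstPull := parse("2026-01-28 06:57:26.606805746 +0000 UTC")
	lastPull := parse("2026-01-28 06:57:28.772061336 +0000 UTC")
	running := parse("2026-01-28 06:57:29.013684702 +0000 UTC")

	e2e := running.Sub(created)          // 4.013684702s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.848429112s, vs. logged podStartSLOduration=1.848429113
	fmt.Println(e2e, slo)
}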
pod="cert-manager/cert-manager-webhook-687f57d79b-2kkdr" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.687692 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdwx"] Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.687999 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-controller" containerID="cri-o://a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688088 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688107 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-acl-logging" containerID="cri-o://e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688131 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="sbdb" containerID="cri-o://4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688099 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="nbdb" containerID="cri-o://fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688150 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-node" containerID="cri-o://89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.688223 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="northd" containerID="cri-o://a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.711778 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller" containerID="cri-o://eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" gracePeriod=30 Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.943293 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovnkube-controller/3.log" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.945068 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovn-acl-logging/0.log" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.945495 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovn-controller/0.log" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.945842 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987375 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tdjgt"] Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987544 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-acl-logging" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987555 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-acl-logging" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987565 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987571 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987580 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987586 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987594 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987599 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987605 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="northd" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987610 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="northd" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987618 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="nbdb" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987625 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="nbdb" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987633 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-node" Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987638 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-node" Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987645 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987650 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987657 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="sbdb"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987661 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="sbdb"
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987670 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987675 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987683 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987687 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987696 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kubecfg-setup"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987701 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kubecfg-setup"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987776 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987784 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987792 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987797 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987803 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="northd"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987811 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="nbdb"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987817 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="sbdb"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987823 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-node"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987828 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="kube-rbac-proxy-ovn-metrics"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987838 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovn-acl-logging"
Jan 28 06:57:36 crc kubenswrapper[4642]: E0128 06:57:36.987908 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987914 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987991 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.987997 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerName="ovnkube-controller"
Jan 28 06:57:36 crc kubenswrapper[4642]: I0128 06:57:36.989744 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt"
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.030869 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/2.log"
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.031258 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/1.log"
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.031301 4642 generic.go:334] "Generic (PLEG): container finished" podID="3d569b7c-8a0e-4074-b61f-4139413b9849" containerID="284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb" exitCode=2
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.031349 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerDied","Data":"284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.031379 4642 scope.go:117] "RemoveContainer" containerID="0226cb06c2fc831da7dadb94e0ca448e1b610cc146da17dd286aae90a38aa7c5"
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.031788 4642 scope.go:117] "RemoveContainer" containerID="284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb"
Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.031957 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-28n48_openshift-multus(3d569b7c-8a0e-4074-b61f-4139413b9849)\"" pod="openshift-multus/multus-28n48" podUID="3d569b7c-8a0e-4074-b61f-4139413b9849"
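The "back-off 20s" in the CrashLoopBackOff record above is the second step of the kubelet's restart backoff; the usual defaults are a 10s base that doubles per failed restart, capped at five minutes. A sketch of that schedule, using the well-known defaults rather than anything read from this cluster:

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for failure := 1; failure <= 6; failure++ {
		fmt.Printf("failure %d: back-off %v\n", failure, backoff) // failure 2 prints 20s
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}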
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovn-acl-logging/0.log" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035445 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7fdwx_0f5d2a3f-25d8-4051-8000-30ec01a14eb0/ovn-controller/0.log" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035763 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035782 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035789 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035797 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035808 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035793 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035833 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035845 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035814 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" exitCode=0 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035870 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" exitCode=143 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035885 4642 generic.go:334] "Generic (PLEG): container finished" podID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" exitCode=143 Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035843 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035856 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035960 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035979 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.035989 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036001 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036007 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036012 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036016 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036023 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036027 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036032 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036037 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036043 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} Jan 28 06:57:37 crc 
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036049 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036057 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036062 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036067 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036072 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036076 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036081 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036085 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036090 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036095 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036099 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036105 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036112 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"}
Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036117 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"}
containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036123 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036130 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036135 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036139 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036145 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036150 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036154 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036159 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036165 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdwx" event={"ID":"0f5d2a3f-25d8-4051-8000-30ec01a14eb0","Type":"ContainerDied","Data":"eb82ce31ee302e958f27b0d545954ccb2660743db4bf70f2312741fa99aeb02e"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036171 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036200 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036205 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036210 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036215 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036220 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036224 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036230 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036234 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.036238 4642 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037630 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037660 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037675 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037696 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037714 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037731 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9vq\" (UniqueName: \"kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037746 4642 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037760 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037779 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037794 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037813 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037827 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037841 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037857 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037876 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037893 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037960 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-log-socket\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037978 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-systemd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.037993 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8fh\" (UniqueName: \"kubernetes.io/projected/a96e0817-a5a7-4684-9127-99bf21c6f3f3-kube-api-access-rv8fh\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038008 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-bin\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038024 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038038 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-script-lib\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038066 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038082 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038095 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-systemd-units\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038110 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-var-lib-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038125 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-config\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038140 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-slash\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038157 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-kubelet\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038173 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-ovn\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038255 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-env-overrides\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038272 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-netd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038285 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-netns\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038300 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-node-log\") pod \"ovnkube-node-tdjgt\" (UID: 
\"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038321 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-etc-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038336 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovn-node-metrics-cert\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038918 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038926 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038928 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038955 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.038994 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039002 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039013 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039057 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log" (OuterVolumeSpecName: "node-log") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039317 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039329 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039364 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash" (OuterVolumeSpecName: "host-slash") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039364 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.039586 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.043788 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.044037 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq" (OuterVolumeSpecName: "kube-api-access-4g9vq") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "kube-api-access-4g9vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.052695 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.055776 4642 scope.go:117] "RemoveContainer" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.068356 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.079239 4642 scope.go:117] "RemoveContainer" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.087543 4642 scope.go:117] "RemoveContainer" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.095755 4642 scope.go:117] "RemoveContainer" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.108201 4642 scope.go:117] "RemoveContainer" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.117926 4642 scope.go:117] "RemoveContainer" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.127859 4642 scope.go:117] "RemoveContainer" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.136018 4642 scope.go:117] "RemoveContainer" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138741 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138786 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138803 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket\") pod 
\"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138823 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\" (UID: \"0f5d2a3f-25d8-4051-8000-30ec01a14eb0\") " Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138872 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138902 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-etc-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138930 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket" (OuterVolumeSpecName: "log-socket") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138932 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovn-node-metrics-cert\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138935 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138951 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0f5d2a3f-25d8-4051-8000-30ec01a14eb0" (UID: "0f5d2a3f-25d8-4051-8000-30ec01a14eb0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.138991 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-etc-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139027 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-log-socket\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139075 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-log-socket\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139083 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-systemd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139116 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-systemd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139134 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv8fh\" (UniqueName: \"kubernetes.io/projected/a96e0817-a5a7-4684-9127-99bf21c6f3f3-kube-api-access-rv8fh\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139156 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-bin\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139242 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139264 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-script-lib\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139285 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139290 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-bin\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139307 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139325 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-systemd-units\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139328 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139354 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-var-lib-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139371 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-config\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139375 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139393 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-slash\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139409 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139420 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-kubelet\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139439 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-ovn\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139454 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-systemd-units\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139504 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-env-overrides\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139531 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-netd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139548 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-netns\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139566 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-node-log\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139603 4642 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139614 4642 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc 
kubenswrapper[4642]: I0128 06:57:37.139623 4642 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139633 4642 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139643 4642 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139652 4642 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139662 4642 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139671 4642 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139680 4642 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139689 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9vq\" (UniqueName: \"kubernetes.io/projected/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-kube-api-access-4g9vq\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139697 4642 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139706 4642 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139713 4642 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139722 4642 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139732 4642 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc 
kubenswrapper[4642]: I0128 06:57:37.139741 4642 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139750 4642 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139757 4642 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139765 4642 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139773 4642 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f5d2a3f-25d8-4051-8000-30ec01a14eb0-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139797 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-node-log\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139866 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-script-lib\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140004 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-kubelet\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140029 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-slash\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140206 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-run-ovn\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.139438 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-var-lib-openvswitch\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc 
kubenswrapper[4642]: I0128 06:57:37.140396 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovnkube-config\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140429 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-cni-netd\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140463 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a96e0817-a5a7-4684-9127-99bf21c6f3f3-host-run-netns\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.140681 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a96e0817-a5a7-4684-9127-99bf21c6f3f3-env-overrides\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.143162 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a96e0817-a5a7-4684-9127-99bf21c6f3f3-ovn-node-metrics-cert\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.151228 4642 scope.go:117] "RemoveContainer" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.154949 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv8fh\" (UniqueName: \"kubernetes.io/projected/a96e0817-a5a7-4684-9127-99bf21c6f3f3-kube-api-access-rv8fh\") pod \"ovnkube-node-tdjgt\" (UID: \"a96e0817-a5a7-4684-9127-99bf21c6f3f3\") " pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.160708 4642 scope.go:117] "RemoveContainer" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.160999 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": container with ID starting with eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60 not found: ID does not exist" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161031 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} err="failed to get container status \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": rpc error: code = NotFound desc = could not find container \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": 
container with ID starting with eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161057 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.161410 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": container with ID starting with f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec not found: ID does not exist" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161431 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} err="failed to get container status \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": rpc error: code = NotFound desc = could not find container \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": container with ID starting with f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161457 4642 scope.go:117] "RemoveContainer" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.161726 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": container with ID starting with 4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874 not found: ID does not exist" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161745 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} err="failed to get container status \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": rpc error: code = NotFound desc = could not find container \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": container with ID starting with 4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.161758 4642 scope.go:117] "RemoveContainer" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.162139 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": container with ID starting with fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d not found: ID does not exist" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.162166 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} err="failed to get container status 
\"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": rpc error: code = NotFound desc = could not find container \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": container with ID starting with fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.162448 4642 scope.go:117] "RemoveContainer" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.162995 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": container with ID starting with a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f not found: ID does not exist" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163019 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} err="failed to get container status \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": rpc error: code = NotFound desc = could not find container \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": container with ID starting with a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163036 4642 scope.go:117] "RemoveContainer" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.163263 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": container with ID starting with d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497 not found: ID does not exist" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163282 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} err="failed to get container status \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": rpc error: code = NotFound desc = could not find container \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": container with ID starting with d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163294 4642 scope.go:117] "RemoveContainer" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.163529 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": container with ID starting with 89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137 not found: ID does not exist" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163553 4642 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} err="failed to get container status \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": rpc error: code = NotFound desc = could not find container \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": container with ID starting with 89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163569 4642 scope.go:117] "RemoveContainer" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.163884 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": container with ID starting with e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940 not found: ID does not exist" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163904 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} err="failed to get container status \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": rpc error: code = NotFound desc = could not find container \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": container with ID starting with e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.163919 4642 scope.go:117] "RemoveContainer" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.164676 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": container with ID starting with a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b not found: ID does not exist" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.164699 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} err="failed to get container status \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": rpc error: code = NotFound desc = could not find container \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": container with ID starting with a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.164713 4642 scope.go:117] "RemoveContainer" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: E0128 06:57:37.164981 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": container with ID starting with ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e not found: ID does not exist" 
containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165005 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} err="failed to get container status \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": rpc error: code = NotFound desc = could not find container \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": container with ID starting with ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165019 4642 scope.go:117] "RemoveContainer" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165261 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} err="failed to get container status \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": rpc error: code = NotFound desc = could not find container \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": container with ID starting with eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165279 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165488 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} err="failed to get container status \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": rpc error: code = NotFound desc = could not find container \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": container with ID starting with f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165505 4642 scope.go:117] "RemoveContainer" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165724 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} err="failed to get container status \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": rpc error: code = NotFound desc = could not find container \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": container with ID starting with 4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.165741 4642 scope.go:117] "RemoveContainer" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.166644 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} err="failed to get container status \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": rpc error: code = NotFound desc = could not find 
container \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": container with ID starting with fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.166668 4642 scope.go:117] "RemoveContainer" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.166872 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} err="failed to get container status \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": rpc error: code = NotFound desc = could not find container \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": container with ID starting with a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.166906 4642 scope.go:117] "RemoveContainer" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167340 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} err="failed to get container status \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": rpc error: code = NotFound desc = could not find container \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": container with ID starting with d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167362 4642 scope.go:117] "RemoveContainer" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167596 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} err="failed to get container status \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": rpc error: code = NotFound desc = could not find container \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": container with ID starting with 89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167629 4642 scope.go:117] "RemoveContainer" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167922 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} err="failed to get container status \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": rpc error: code = NotFound desc = could not find container \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": container with ID starting with e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.167946 4642 scope.go:117] "RemoveContainer" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.168155 4642 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} err="failed to get container status \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": rpc error: code = NotFound desc = could not find container \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": container with ID starting with a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.168220 4642 scope.go:117] "RemoveContainer" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.168425 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} err="failed to get container status \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": rpc error: code = NotFound desc = could not find container \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": container with ID starting with ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.168448 4642 scope.go:117] "RemoveContainer" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169120 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} err="failed to get container status \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": rpc error: code = NotFound desc = could not find container \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": container with ID starting with eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169143 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169411 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} err="failed to get container status \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": rpc error: code = NotFound desc = could not find container \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": container with ID starting with f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169434 4642 scope.go:117] "RemoveContainer" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169776 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} err="failed to get container status \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": rpc error: code = NotFound desc = could not find container \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": container with ID starting with 
4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.169795 4642 scope.go:117] "RemoveContainer" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170041 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} err="failed to get container status \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": rpc error: code = NotFound desc = could not find container \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": container with ID starting with fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170063 4642 scope.go:117] "RemoveContainer" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170396 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} err="failed to get container status \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": rpc error: code = NotFound desc = could not find container \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": container with ID starting with a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170418 4642 scope.go:117] "RemoveContainer" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170640 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} err="failed to get container status \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": rpc error: code = NotFound desc = could not find container \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": container with ID starting with d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170660 4642 scope.go:117] "RemoveContainer" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170831 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} err="failed to get container status \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": rpc error: code = NotFound desc = could not find container \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": container with ID starting with 89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.170854 4642 scope.go:117] "RemoveContainer" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171151 4642 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} err="failed to get container status \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": rpc error: code = NotFound desc = could not find container \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": container with ID starting with e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171170 4642 scope.go:117] "RemoveContainer" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171429 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} err="failed to get container status \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": rpc error: code = NotFound desc = could not find container \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": container with ID starting with a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171451 4642 scope.go:117] "RemoveContainer" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171631 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} err="failed to get container status \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": rpc error: code = NotFound desc = could not find container \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": container with ID starting with ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171651 4642 scope.go:117] "RemoveContainer" containerID="eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171845 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60"} err="failed to get container status \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": rpc error: code = NotFound desc = could not find container \"eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60\": container with ID starting with eeed5a705967acb120d93dcd944360096cb0ce38a2ba8309391a39f2f8b33c60 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.171865 4642 scope.go:117] "RemoveContainer" containerID="f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172056 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec"} err="failed to get container status \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": rpc error: code = NotFound desc = could not find container \"f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec\": container with ID starting with f946a3d181059c560eee8b44f77e584a9b2169a8a7c6ee76a78036779954e5ec not found: ID does not exist" Jan 
28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172086 4642 scope.go:117] "RemoveContainer" containerID="4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172405 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874"} err="failed to get container status \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": rpc error: code = NotFound desc = could not find container \"4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874\": container with ID starting with 4cdaa92914e736d4130cee0be4f11381b0edec756ca70235c6a4205f04777874 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172425 4642 scope.go:117] "RemoveContainer" containerID="fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172621 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d"} err="failed to get container status \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": rpc error: code = NotFound desc = could not find container \"fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d\": container with ID starting with fba5c27390ef97be2917d1b375ca741d7a74118ba523043b1c826e4fccaee05d not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172641 4642 scope.go:117] "RemoveContainer" containerID="a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172952 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f"} err="failed to get container status \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": rpc error: code = NotFound desc = could not find container \"a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f\": container with ID starting with a597a1b6d4b61efc1b46773965ad598626e298815da9ceffaf97965d72286d3f not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.172973 4642 scope.go:117] "RemoveContainer" containerID="d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173169 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497"} err="failed to get container status \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": rpc error: code = NotFound desc = could not find container \"d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497\": container with ID starting with d9c679b07a820cca68800f672c74f133d871d41e4cfe6c18e101febd7110b497 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173218 4642 scope.go:117] "RemoveContainer" containerID="89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173394 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137"} err="failed to get container status 
\"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": rpc error: code = NotFound desc = could not find container \"89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137\": container with ID starting with 89c33d9fa76f5feb29f150ff38354d292c6cb68dedb78eb8496754c23272d137 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173424 4642 scope.go:117] "RemoveContainer" containerID="e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173591 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940"} err="failed to get container status \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": rpc error: code = NotFound desc = could not find container \"e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940\": container with ID starting with e87e992d302acbefdcab2e332757e1f6d6535a0acdbe268b1e2eb1d49a615940 not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173614 4642 scope.go:117] "RemoveContainer" containerID="a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173792 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b"} err="failed to get container status \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": rpc error: code = NotFound desc = could not find container \"a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b\": container with ID starting with a6187660025657deea7e0cd68d1341136261c5e66c9e32c3b00b181df029843b not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.173811 4642 scope.go:117] "RemoveContainer" containerID="ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.174003 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e"} err="failed to get container status \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": rpc error: code = NotFound desc = could not find container \"ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e\": container with ID starting with ecf4d023197544de3b0fe0f541dc75332bb08d0f44046e46fd468e3c5a0f251e not found: ID does not exist" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.300031 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.360412 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdwx"] Jan 28 06:57:37 crc kubenswrapper[4642]: I0128 06:57:37.363399 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdwx"] Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.041458 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/2.log" Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.042832 4642 generic.go:334] "Generic (PLEG): container finished" podID="a96e0817-a5a7-4684-9127-99bf21c6f3f3" containerID="63b06af8579f5a95057210501fdfaefe1b5af5f788433f5051bc7fe4adcb7d52" exitCode=0 Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.042861 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerDied","Data":"63b06af8579f5a95057210501fdfaefe1b5af5f788433f5051bc7fe4adcb7d52"} Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.042881 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"478580ac798668d6a084e3c92e0cb872531d172908644c49bd1c76760dce6efb"} Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.200023 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.200350 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.200388 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.200849 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 06:57:38 crc kubenswrapper[4642]: I0128 06:57:38.200910 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4" gracePeriod=600 Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054248 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" 
event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"f8bc1f62adae794c3cb5a15087050992e93827fe03b410753f8f823f13171841"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054331 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"280374ec69bb5b4fa6dc28672447b2f67290d8b855c5100da1ca08ee23307f42"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054345 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"d815cfe3935ded7368f9861e48bb451fc56772317a81912416cb46f9a6c2af29"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054356 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"dcc88fd9d65e529e62d2523830686e5a0651376c0711e818f8ba39b7769ce5d5"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"793c60e9d94c4688d49e34b6a676d086665e1ea80498371342c958e558cf450c"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.054376 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"81aa3a6d68e0e208592263c7922c65a9e5f06a43e50ba6a1ee8a9c20ee15a37b"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.058350 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4" exitCode=0 Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.058400 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.058437 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1"} Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.058471 4642 scope.go:117] "RemoveContainer" containerID="48bbb04e7e00b1645e7fb76e2fc1e8e43eb0a9a8d80349bf4cb4e5bb19fe3f52" Jan 28 06:57:39 crc kubenswrapper[4642]: I0128 06:57:39.104277 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5d2a3f-25d8-4051-8000-30ec01a14eb0" path="/var/lib/kubelet/pods/0f5d2a3f-25d8-4051-8000-30ec01a14eb0/volumes" Jan 28 06:57:41 crc kubenswrapper[4642]: I0128 06:57:41.072023 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"c63364724240024b16c2b3027447ea474e88d8bad4b8e4ca82b29af468499550"} Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.082594 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" event={"ID":"a96e0817-a5a7-4684-9127-99bf21c6f3f3","Type":"ContainerStarted","Data":"80a70f340d6bbf21f18b24085bf65957a4c97ed5cd96f059380dcfc1dcaedae6"} Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.083927 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.083954 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.083992 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.105259 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" podStartSLOduration=7.105245967 podStartE2EDuration="7.105245967s" podCreationTimestamp="2026-01-28 06:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:57:43.102268737 +0000 UTC m=+586.334357537" watchObservedRunningTime="2026-01-28 06:57:43.105245967 +0000 UTC m=+586.337334776" Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.106276 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:43 crc kubenswrapper[4642]: I0128 06:57:43.106687 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:57:48 crc kubenswrapper[4642]: I0128 06:57:48.098128 4642 scope.go:117] "RemoveContainer" containerID="284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb" Jan 28 06:57:48 crc kubenswrapper[4642]: E0128 06:57:48.098694 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-28n48_openshift-multus(3d569b7c-8a0e-4074-b61f-4139413b9849)\"" pod="openshift-multus/multus-28n48" podUID="3d569b7c-8a0e-4074-b61f-4139413b9849" Jan 28 06:58:00 crc kubenswrapper[4642]: I0128 06:58:00.098545 4642 scope.go:117] "RemoveContainer" containerID="284b63834bf5eec6caab472794a1dfb8ec01f2a5fa5e3807db375880b7b556eb" Jan 28 06:58:01 crc kubenswrapper[4642]: I0128 06:58:01.148667 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-28n48_3d569b7c-8a0e-4074-b61f-4139413b9849/kube-multus/2.log" Jan 28 06:58:01 crc kubenswrapper[4642]: I0128 06:58:01.148995 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-28n48" event={"ID":"3d569b7c-8a0e-4074-b61f-4139413b9849","Type":"ContainerStarted","Data":"3cca092b9b13adfaf160f56aef7d625884539d3cd73730721dc4913cbe08aed4"} Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.152914 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp"] Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.154216 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.155565 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.158748 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp"] Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.282306 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5jq\" (UniqueName: \"kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.282348 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.282397 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.382730 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.382821 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5jq\" (UniqueName: \"kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.382864 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.383114 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.383262 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.397282 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5jq\" (UniqueName: \"kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.468759 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:04 crc kubenswrapper[4642]: I0128 06:58:04.791849 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp"] Jan 28 06:58:05 crc kubenswrapper[4642]: I0128 06:58:05.164401 4642 generic.go:334] "Generic (PLEG): container finished" podID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerID="777cdd5ad9a38441b1b9dac2ab507924133e1ae45ebb36f78acb845046449674" exitCode=0 Jan 28 06:58:05 crc kubenswrapper[4642]: I0128 06:58:05.164478 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" event={"ID":"4ac15d4c-285c-4cef-8de9-b532767c0a6b","Type":"ContainerDied","Data":"777cdd5ad9a38441b1b9dac2ab507924133e1ae45ebb36f78acb845046449674"} Jan 28 06:58:05 crc kubenswrapper[4642]: I0128 06:58:05.164561 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" event={"ID":"4ac15d4c-285c-4cef-8de9-b532767c0a6b","Type":"ContainerStarted","Data":"671df9efe0beb03336b7440e1c1d4e7e4696270985cf83495bd3ff80f733240f"} Jan 28 06:58:07 crc kubenswrapper[4642]: I0128 06:58:07.173229 4642 generic.go:334] "Generic (PLEG): container finished" podID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerID="bdf167b95a0932ed52a057fec77608b5e5252e2bc0f31211597e4ece55ae0bab" exitCode=0 Jan 28 06:58:07 crc kubenswrapper[4642]: I0128 06:58:07.173270 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" event={"ID":"4ac15d4c-285c-4cef-8de9-b532767c0a6b","Type":"ContainerDied","Data":"bdf167b95a0932ed52a057fec77608b5e5252e2bc0f31211597e4ece55ae0bab"} Jan 28 06:58:07 crc kubenswrapper[4642]: I0128 06:58:07.318802 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tdjgt" Jan 28 06:58:08 crc kubenswrapper[4642]: I0128 06:58:08.179005 4642 generic.go:334] "Generic (PLEG): container finished" 
podID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerID="ac2abdb2cc47b0314974c0de4cd077b98333a39a5f6c142d841c33d3de49cd4e" exitCode=0 Jan 28 06:58:08 crc kubenswrapper[4642]: I0128 06:58:08.179094 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" event={"ID":"4ac15d4c-285c-4cef-8de9-b532767c0a6b","Type":"ContainerDied","Data":"ac2abdb2cc47b0314974c0de4cd077b98333a39a5f6c142d841c33d3de49cd4e"} Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.344260 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.529855 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle\") pod \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.529896 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util\") pod \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.530009 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b5jq\" (UniqueName: \"kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq\") pod \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\" (UID: \"4ac15d4c-285c-4cef-8de9-b532767c0a6b\") " Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.530501 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle" (OuterVolumeSpecName: "bundle") pod "4ac15d4c-285c-4cef-8de9-b532767c0a6b" (UID: "4ac15d4c-285c-4cef-8de9-b532767c0a6b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.534759 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq" (OuterVolumeSpecName: "kube-api-access-5b5jq") pod "4ac15d4c-285c-4cef-8de9-b532767c0a6b" (UID: "4ac15d4c-285c-4cef-8de9-b532767c0a6b"). InnerVolumeSpecName "kube-api-access-5b5jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.539622 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util" (OuterVolumeSpecName: "util") pod "4ac15d4c-285c-4cef-8de9-b532767c0a6b" (UID: "4ac15d4c-285c-4cef-8de9-b532767c0a6b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.631888 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b5jq\" (UniqueName: \"kubernetes.io/projected/4ac15d4c-285c-4cef-8de9-b532767c0a6b-kube-api-access-5b5jq\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.631918 4642 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:09 crc kubenswrapper[4642]: I0128 06:58:09.631928 4642 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac15d4c-285c-4cef-8de9-b532767c0a6b-util\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:10 crc kubenswrapper[4642]: I0128 06:58:10.189203 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" event={"ID":"4ac15d4c-285c-4cef-8de9-b532767c0a6b","Type":"ContainerDied","Data":"671df9efe0beb03336b7440e1c1d4e7e4696270985cf83495bd3ff80f733240f"} Jan 28 06:58:10 crc kubenswrapper[4642]: I0128 06:58:10.189392 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671df9efe0beb03336b7440e1c1d4e7e4696270985cf83495bd3ff80f733240f" Jan 28 06:58:10 crc kubenswrapper[4642]: I0128 06:58:10.189266 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191137 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-89bz2"] Jan 28 06:58:12 crc kubenswrapper[4642]: E0128 06:58:12.191328 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="util" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191339 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="util" Jan 28 06:58:12 crc kubenswrapper[4642]: E0128 06:58:12.191348 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="pull" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191353 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="pull" Jan 28 06:58:12 crc kubenswrapper[4642]: E0128 06:58:12.191360 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="extract" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191365 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="extract" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191441 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac15d4c-285c-4cef-8de9-b532767c0a6b" containerName="extract" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.191747 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.193023 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-kgq77" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.194414 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.194835 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.200060 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-89bz2"] Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.355660 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7znd\" (UniqueName: \"kubernetes.io/projected/b97164bc-f5f3-489d-b0f2-c33fdf700a20-kube-api-access-v7znd\") pod \"nmstate-operator-646758c888-89bz2\" (UID: \"b97164bc-f5f3-489d-b0f2-c33fdf700a20\") " pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.456245 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7znd\" (UniqueName: \"kubernetes.io/projected/b97164bc-f5f3-489d-b0f2-c33fdf700a20-kube-api-access-v7znd\") pod \"nmstate-operator-646758c888-89bz2\" (UID: \"b97164bc-f5f3-489d-b0f2-c33fdf700a20\") " pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.470772 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7znd\" (UniqueName: \"kubernetes.io/projected/b97164bc-f5f3-489d-b0f2-c33fdf700a20-kube-api-access-v7znd\") pod \"nmstate-operator-646758c888-89bz2\" (UID: \"b97164bc-f5f3-489d-b0f2-c33fdf700a20\") " pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.505095 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" Jan 28 06:58:12 crc kubenswrapper[4642]: I0128 06:58:12.834497 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-89bz2"] Jan 28 06:58:12 crc kubenswrapper[4642]: W0128 06:58:12.843317 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97164bc_f5f3_489d_b0f2_c33fdf700a20.slice/crio-b6c4ba83ff3000ff770b7fb9f481a45f3fa527b320294e92743d86d743f35666 WatchSource:0}: Error finding container b6c4ba83ff3000ff770b7fb9f481a45f3fa527b320294e92743d86d743f35666: Status 404 returned error can't find the container with id b6c4ba83ff3000ff770b7fb9f481a45f3fa527b320294e92743d86d743f35666 Jan 28 06:58:13 crc kubenswrapper[4642]: I0128 06:58:13.199728 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" event={"ID":"b97164bc-f5f3-489d-b0f2-c33fdf700a20","Type":"ContainerStarted","Data":"b6c4ba83ff3000ff770b7fb9f481a45f3fa527b320294e92743d86d743f35666"} Jan 28 06:58:15 crc kubenswrapper[4642]: I0128 06:58:15.211624 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" event={"ID":"b97164bc-f5f3-489d-b0f2-c33fdf700a20","Type":"ContainerStarted","Data":"ec6e32a2e1558577c1faedf6777e8cc778f981ef6a2bcdf5b5015517316b1bd4"} Jan 28 06:58:15 crc kubenswrapper[4642]: I0128 06:58:15.233105 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-89bz2" podStartSLOduration=1.6652182899999999 podStartE2EDuration="3.233090039s" podCreationTimestamp="2026-01-28 06:58:12 +0000 UTC" firstStartedPulling="2026-01-28 06:58:12.846045664 +0000 UTC m=+616.078134473" lastFinishedPulling="2026-01-28 06:58:14.413917413 +0000 UTC m=+617.646006222" observedRunningTime="2026-01-28 06:58:15.232948253 +0000 UTC m=+618.465037062" watchObservedRunningTime="2026-01-28 06:58:15.233090039 +0000 UTC m=+618.465178849" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.009582 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v4jdx"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.010378 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.011935 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jhwwx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.018302 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v4jdx"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.023854 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.024495 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.026756 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.036414 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.055389 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-nbnj6"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.056582 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.098850 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq44w\" (UniqueName: \"kubernetes.io/projected/24cc2707-e7fa-4112-83cd-549fede20a62-kube-api-access-gq44w\") pod \"nmstate-metrics-54757c584b-v4jdx\" (UID: \"24cc2707-e7fa-4112-83cd-549fede20a62\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.112849 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.113766 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.115978 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.116280 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.118959 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qqvt2" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.120012 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201030 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-ovs-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201076 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcadcf-4728-4cba-9997-5e76250477e6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201109 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-dbus-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 
crc kubenswrapper[4642]: I0128 06:58:16.201132 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6p5g\" (UniqueName: \"kubernetes.io/projected/e8385e4f-aa98-4f3c-9712-0ee8951e1322-kube-api-access-g6p5g\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201181 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrrs\" (UniqueName: \"kubernetes.io/projected/41dcadcf-4728-4cba-9997-5e76250477e6-kube-api-access-jwrrs\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201218 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8385e4f-aa98-4f3c-9712-0ee8951e1322-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201256 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-nmstate-lock\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201273 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/41dcadcf-4728-4cba-9997-5e76250477e6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201295 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7twkj\" (UniqueName: \"kubernetes.io/projected/e311b34d-bd2e-4096-bfd4-734999821b7e-kube-api-access-7twkj\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.201336 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq44w\" (UniqueName: \"kubernetes.io/projected/24cc2707-e7fa-4112-83cd-549fede20a62-kube-api-access-gq44w\") pod \"nmstate-metrics-54757c584b-v4jdx\" (UID: \"24cc2707-e7fa-4112-83cd-549fede20a62\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.220412 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq44w\" (UniqueName: \"kubernetes.io/projected/24cc2707-e7fa-4112-83cd-549fede20a62-kube-api-access-gq44w\") pod \"nmstate-metrics-54757c584b-v4jdx\" (UID: \"24cc2707-e7fa-4112-83cd-549fede20a62\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.278809 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b6dc57555-xpblc"] Jan 28 06:58:16 crc 
kubenswrapper[4642]: I0128 06:58:16.279542 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.285343 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b6dc57555-xpblc"] Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302224 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7twkj\" (UniqueName: \"kubernetes.io/projected/e311b34d-bd2e-4096-bfd4-734999821b7e-kube-api-access-7twkj\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302279 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-trusted-ca-bundle\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302299 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-service-ca\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302316 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-ovs-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302337 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcadcf-4728-4cba-9997-5e76250477e6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302356 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-dbus-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302374 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6p5g\" (UniqueName: \"kubernetes.io/projected/e8385e4f-aa98-4f3c-9712-0ee8951e1322-kube-api-access-g6p5g\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302411 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrrs\" (UniqueName: \"kubernetes.io/projected/41dcadcf-4728-4cba-9997-5e76250477e6-kube-api-access-jwrrs\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302445 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8385e4f-aa98-4f3c-9712-0ee8951e1322-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302462 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-console-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302480 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-oauth-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302496 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-oauth-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302515 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302530 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfg6\" (UniqueName: \"kubernetes.io/projected/7a2e0975-0a24-40ec-8e52-b61b16064396-kube-api-access-glfg6\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302551 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-nmstate-lock\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.302568 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/41dcadcf-4728-4cba-9997-5e76250477e6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.303266 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/41dcadcf-4728-4cba-9997-5e76250477e6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.303982 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-ovs-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.304076 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-nmstate-lock\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.304308 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e311b34d-bd2e-4096-bfd4-734999821b7e-dbus-socket\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.307417 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/41dcadcf-4728-4cba-9997-5e76250477e6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.313345 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8385e4f-aa98-4f3c-9712-0ee8951e1322-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.318011 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6p5g\" (UniqueName: \"kubernetes.io/projected/e8385e4f-aa98-4f3c-9712-0ee8951e1322-kube-api-access-g6p5g\") pod \"nmstate-webhook-8474b5b9d8-pmhhc\" (UID: \"e8385e4f-aa98-4f3c-9712-0ee8951e1322\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.318135 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7twkj\" (UniqueName: \"kubernetes.io/projected/e311b34d-bd2e-4096-bfd4-734999821b7e-kube-api-access-7twkj\") pod \"nmstate-handler-nbnj6\" (UID: \"e311b34d-bd2e-4096-bfd4-734999821b7e\") " pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.320109 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrrs\" (UniqueName: \"kubernetes.io/projected/41dcadcf-4728-4cba-9997-5e76250477e6-kube-api-access-jwrrs\") pod \"nmstate-console-plugin-7754f76f8b-mghfw\" (UID: \"41dcadcf-4728-4cba-9997-5e76250477e6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.322933 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.339954 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.370738 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.402901 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-trusted-ca-bundle\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.402937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-service-ca\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.403002 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-console-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.403022 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-oauth-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.403041 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-oauth-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.403063 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.403080 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glfg6\" (UniqueName: \"kubernetes.io/projected/7a2e0975-0a24-40ec-8e52-b61b16064396-kube-api-access-glfg6\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.404152 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-trusted-ca-bundle\") pod \"console-6b6dc57555-xpblc\" (UID: 
\"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.404764 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-oauth-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.405055 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-service-ca\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.405507 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a2e0975-0a24-40ec-8e52-b61b16064396-console-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.406882 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-oauth-config\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.407700 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2e0975-0a24-40ec-8e52-b61b16064396-console-serving-cert\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.417089 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glfg6\" (UniqueName: \"kubernetes.io/projected/7a2e0975-0a24-40ec-8e52-b61b16064396-kube-api-access-glfg6\") pod \"console-6b6dc57555-xpblc\" (UID: \"7a2e0975-0a24-40ec-8e52-b61b16064396\") " pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.429018 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.526613 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v4jdx"] Jan 28 06:58:16 crc kubenswrapper[4642]: W0128 06:58:16.533407 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24cc2707_e7fa_4112_83cd_549fede20a62.slice/crio-0e36ef66de97fd70784af6e4246adc33b13f190a329193ccc018ff73887ee6f5 WatchSource:0}: Error finding container 0e36ef66de97fd70784af6e4246adc33b13f190a329193ccc018ff73887ee6f5: Status 404 returned error can't find the container with id 0e36ef66de97fd70784af6e4246adc33b13f190a329193ccc018ff73887ee6f5 Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.586660 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw"] Jan 28 06:58:16 crc kubenswrapper[4642]: W0128 06:58:16.588243 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41dcadcf_4728_4cba_9997_5e76250477e6.slice/crio-9981c6df45da637a597291c4e5c0c38217e1a30bb4ce51f361e96caed1ce1ec2 WatchSource:0}: Error finding container 9981c6df45da637a597291c4e5c0c38217e1a30bb4ce51f361e96caed1ce1ec2: Status 404 returned error can't find the container with id 9981c6df45da637a597291c4e5c0c38217e1a30bb4ce51f361e96caed1ce1ec2 Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.595523 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.747324 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc"] Jan 28 06:58:16 crc kubenswrapper[4642]: W0128 06:58:16.752336 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8385e4f_aa98_4f3c_9712_0ee8951e1322.slice/crio-403a8b3d6fe9cebbbfcad3eda67fa2a0617075f8942fec5ee2281b6bd6ca255c WatchSource:0}: Error finding container 403a8b3d6fe9cebbbfcad3eda67fa2a0617075f8942fec5ee2281b6bd6ca255c: Status 404 returned error can't find the container with id 403a8b3d6fe9cebbbfcad3eda67fa2a0617075f8942fec5ee2281b6bd6ca255c Jan 28 06:58:16 crc kubenswrapper[4642]: I0128 06:58:16.760398 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b6dc57555-xpblc"] Jan 28 06:58:16 crc kubenswrapper[4642]: W0128 06:58:16.763941 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a2e0975_0a24_40ec_8e52_b61b16064396.slice/crio-afe170635bce87b2af5243540fd9b6f74104d341890d368a8896e165face4502 WatchSource:0}: Error finding container afe170635bce87b2af5243540fd9b6f74104d341890d368a8896e165face4502: Status 404 returned error can't find the container with id afe170635bce87b2af5243540fd9b6f74104d341890d368a8896e165face4502 Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.226872 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" event={"ID":"41dcadcf-4728-4cba-9997-5e76250477e6","Type":"ContainerStarted","Data":"9981c6df45da637a597291c4e5c0c38217e1a30bb4ce51f361e96caed1ce1ec2"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.229295 4642 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" event={"ID":"24cc2707-e7fa-4112-83cd-549fede20a62","Type":"ContainerStarted","Data":"0e36ef66de97fd70784af6e4246adc33b13f190a329193ccc018ff73887ee6f5"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.230846 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nbnj6" event={"ID":"e311b34d-bd2e-4096-bfd4-734999821b7e","Type":"ContainerStarted","Data":"8ef273ff6c49b2a131221d554b8aca4306614066d5b4b9a0e8245edc361e9e97"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.232894 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" event={"ID":"e8385e4f-aa98-4f3c-9712-0ee8951e1322","Type":"ContainerStarted","Data":"403a8b3d6fe9cebbbfcad3eda67fa2a0617075f8942fec5ee2281b6bd6ca255c"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.235706 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b6dc57555-xpblc" event={"ID":"7a2e0975-0a24-40ec-8e52-b61b16064396","Type":"ContainerStarted","Data":"c63caeb2cfae0fcf3fff9c64b9e5bc7c6acd910183fdefec7fe13e9cae63b976"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.235740 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b6dc57555-xpblc" event={"ID":"7a2e0975-0a24-40ec-8e52-b61b16064396","Type":"ContainerStarted","Data":"afe170635bce87b2af5243540fd9b6f74104d341890d368a8896e165face4502"} Jan 28 06:58:17 crc kubenswrapper[4642]: I0128 06:58:17.250055 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b6dc57555-xpblc" podStartSLOduration=1.250037952 podStartE2EDuration="1.250037952s" podCreationTimestamp="2026-01-28 06:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:58:17.246624757 +0000 UTC m=+620.478713566" watchObservedRunningTime="2026-01-28 06:58:17.250037952 +0000 UTC m=+620.482126771" Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.248384 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" event={"ID":"24cc2707-e7fa-4112-83cd-549fede20a62","Type":"ContainerStarted","Data":"220c951c2406d1d35788d84dd658cb8b799a70fa54c0ecf7c55a9bd50ec26e23"} Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.252223 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nbnj6" event={"ID":"e311b34d-bd2e-4096-bfd4-734999821b7e","Type":"ContainerStarted","Data":"99fd997add0e926bb1571165ff403f73a12f1f1f637afc8f1551887adc146c68"} Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.252491 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.253748 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" event={"ID":"e8385e4f-aa98-4f3c-9712-0ee8951e1322","Type":"ContainerStarted","Data":"b1587dfc12afc999bec703bab0fa16dbaf42d2003a74c87c41b37c168f5e05ad"} Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.253804 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.254985 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" event={"ID":"41dcadcf-4728-4cba-9997-5e76250477e6","Type":"ContainerStarted","Data":"fc0628542ce08a9a339f0eb48b433516353b59de03c65782dcb86d5d3df34801"} Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.266855 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-nbnj6" podStartSLOduration=0.637497601 podStartE2EDuration="3.266843779s" podCreationTimestamp="2026-01-28 06:58:16 +0000 UTC" firstStartedPulling="2026-01-28 06:58:16.397943911 +0000 UTC m=+619.630032720" lastFinishedPulling="2026-01-28 06:58:19.027290089 +0000 UTC m=+622.259378898" observedRunningTime="2026-01-28 06:58:19.263447425 +0000 UTC m=+622.495536234" watchObservedRunningTime="2026-01-28 06:58:19.266843779 +0000 UTC m=+622.498932588" Jan 28 06:58:19 crc kubenswrapper[4642]: I0128 06:58:19.277266 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" podStartSLOduration=1.009953155 podStartE2EDuration="3.277254364s" podCreationTimestamp="2026-01-28 06:58:16 +0000 UTC" firstStartedPulling="2026-01-28 06:58:16.75484885 +0000 UTC m=+619.986937659" lastFinishedPulling="2026-01-28 06:58:19.022150059 +0000 UTC m=+622.254238868" observedRunningTime="2026-01-28 06:58:19.274618691 +0000 UTC m=+622.506707500" watchObservedRunningTime="2026-01-28 06:58:19.277254364 +0000 UTC m=+622.509343173" Jan 28 06:58:22 crc kubenswrapper[4642]: I0128 06:58:22.280009 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" event={"ID":"24cc2707-e7fa-4112-83cd-549fede20a62","Type":"ContainerStarted","Data":"d954e25e3c969edcef4c060f0d57f0be17416cd2c7451e26217127673c37bee4"} Jan 28 06:58:22 crc kubenswrapper[4642]: I0128 06:58:22.293276 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-v4jdx" podStartSLOduration=2.432466294 podStartE2EDuration="7.293265326s" podCreationTimestamp="2026-01-28 06:58:15 +0000 UTC" firstStartedPulling="2026-01-28 06:58:16.541809324 +0000 UTC m=+619.773898133" lastFinishedPulling="2026-01-28 06:58:21.402608356 +0000 UTC m=+624.634697165" observedRunningTime="2026-01-28 06:58:22.291929145 +0000 UTC m=+625.524017953" watchObservedRunningTime="2026-01-28 06:58:22.293265326 +0000 UTC m=+625.525354135" Jan 28 06:58:22 crc kubenswrapper[4642]: I0128 06:58:22.293484 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mghfw" podStartSLOduration=3.861792288 podStartE2EDuration="6.2934799s" podCreationTimestamp="2026-01-28 06:58:16 +0000 UTC" firstStartedPulling="2026-01-28 06:58:16.590316162 +0000 UTC m=+619.822404971" lastFinishedPulling="2026-01-28 06:58:19.022003774 +0000 UTC m=+622.254092583" observedRunningTime="2026-01-28 06:58:19.289009308 +0000 UTC m=+622.521098117" watchObservedRunningTime="2026-01-28 06:58:22.2934799 +0000 UTC m=+625.525568708" Jan 28 06:58:26 crc kubenswrapper[4642]: I0128 06:58:26.389221 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-nbnj6" Jan 28 06:58:26 crc kubenswrapper[4642]: I0128 06:58:26.595756 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:26 crc kubenswrapper[4642]: I0128 06:58:26.595987 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:26 crc kubenswrapper[4642]: I0128 06:58:26.599581 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:27 crc kubenswrapper[4642]: I0128 06:58:27.300952 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b6dc57555-xpblc" Jan 28 06:58:27 crc kubenswrapper[4642]: I0128 06:58:27.337504 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:58:36 crc kubenswrapper[4642]: I0128 06:58:36.344770 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-pmhhc" Jan 28 06:58:45 crc kubenswrapper[4642]: I0128 06:58:45.976012 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q"] Jan 28 06:58:45 crc kubenswrapper[4642]: I0128 06:58:45.977417 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:45 crc kubenswrapper[4642]: I0128 06:58:45.979042 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 06:58:45 crc kubenswrapper[4642]: I0128 06:58:45.983104 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q"] Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.093009 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2j4p\" (UniqueName: \"kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.093122 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.093171 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.194005 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc 
kubenswrapper[4642]: I0128 06:58:46.194053 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.194115 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2j4p\" (UniqueName: \"kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.194577 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.194587 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.210838 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2j4p\" (UniqueName: \"kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.299348 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:46 crc kubenswrapper[4642]: I0128 06:58:46.634744 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q"] Jan 28 06:58:47 crc kubenswrapper[4642]: I0128 06:58:47.384280 4642 generic.go:334] "Generic (PLEG): container finished" podID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerID="24d54e055604e49ddc3ac8cd88f56f3b67eae879c400f67d175e3d2e98234410" exitCode=0 Jan 28 06:58:47 crc kubenswrapper[4642]: I0128 06:58:47.384508 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" event={"ID":"fec6cae0-ef33-4521-b704-1fead4aca74b","Type":"ContainerDied","Data":"24d54e055604e49ddc3ac8cd88f56f3b67eae879c400f67d175e3d2e98234410"} Jan 28 06:58:47 crc kubenswrapper[4642]: I0128 06:58:47.384562 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" event={"ID":"fec6cae0-ef33-4521-b704-1fead4aca74b","Type":"ContainerStarted","Data":"78734680eaa493e7799a43468fa6273aaa6476b49347f17f838e0591db1de996"} Jan 28 06:58:49 crc kubenswrapper[4642]: I0128 06:58:49.396149 4642 generic.go:334] "Generic (PLEG): container finished" podID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerID="f6ff0e64feb4abf34ce2c0d0cdb3e5c15e2621da92c0a74156ecd1e433b2784f" exitCode=0 Jan 28 06:58:49 crc kubenswrapper[4642]: I0128 06:58:49.396224 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" event={"ID":"fec6cae0-ef33-4521-b704-1fead4aca74b","Type":"ContainerDied","Data":"f6ff0e64feb4abf34ce2c0d0cdb3e5c15e2621da92c0a74156ecd1e433b2784f"} Jan 28 06:58:50 crc kubenswrapper[4642]: I0128 06:58:50.403607 4642 generic.go:334] "Generic (PLEG): container finished" podID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerID="66f95576288b4e30ae74daf00136eac1dff8001333341bf9fccd4038cbb1be90" exitCode=0 Jan 28 06:58:50 crc kubenswrapper[4642]: I0128 06:58:50.403660 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" event={"ID":"fec6cae0-ef33-4521-b704-1fead4aca74b","Type":"ContainerDied","Data":"66f95576288b4e30ae74daf00136eac1dff8001333341bf9fccd4038cbb1be90"} Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.586277 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.752007 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util\") pod \"fec6cae0-ef33-4521-b704-1fead4aca74b\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.752063 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle\") pod \"fec6cae0-ef33-4521-b704-1fead4aca74b\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.752086 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2j4p\" (UniqueName: \"kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p\") pod \"fec6cae0-ef33-4521-b704-1fead4aca74b\" (UID: \"fec6cae0-ef33-4521-b704-1fead4aca74b\") " Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.752873 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle" (OuterVolumeSpecName: "bundle") pod "fec6cae0-ef33-4521-b704-1fead4aca74b" (UID: "fec6cae0-ef33-4521-b704-1fead4aca74b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.757614 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p" (OuterVolumeSpecName: "kube-api-access-q2j4p") pod "fec6cae0-ef33-4521-b704-1fead4aca74b" (UID: "fec6cae0-ef33-4521-b704-1fead4aca74b"). InnerVolumeSpecName "kube-api-access-q2j4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.761868 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util" (OuterVolumeSpecName: "util") pod "fec6cae0-ef33-4521-b704-1fead4aca74b" (UID: "fec6cae0-ef33-4521-b704-1fead4aca74b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.853548 4642 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-util\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.853584 4642 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fec6cae0-ef33-4521-b704-1fead4aca74b-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:51 crc kubenswrapper[4642]: I0128 06:58:51.853597 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2j4p\" (UniqueName: \"kubernetes.io/projected/fec6cae0-ef33-4521-b704-1fead4aca74b-kube-api-access-q2j4p\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.378508 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zrrr7" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerName="console" containerID="cri-o://5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824" gracePeriod=15 Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.414549 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" event={"ID":"fec6cae0-ef33-4521-b704-1fead4aca74b","Type":"ContainerDied","Data":"78734680eaa493e7799a43468fa6273aaa6476b49347f17f838e0591db1de996"} Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.414582 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78734680eaa493e7799a43468fa6273aaa6476b49347f17f838e0591db1de996" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.414587 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.672280 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zrrr7_4d5c1bf9-0f8d-4363-8afc-f764165812c8/console/0.log" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.672587 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.765410 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.765466 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.765491 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.766329 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.766455 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca" (OuterVolumeSpecName: "service-ca") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.769108 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866200 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866335 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866361 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5cmm\" (UniqueName: \"kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866381 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config\") pod \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\" (UID: \"4d5c1bf9-0f8d-4363-8afc-f764165812c8\") " Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866553 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866610 4642 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866623 4642 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866633 4642 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866642 4642 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.866861 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config" (OuterVolumeSpecName: "console-config") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.869230 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.869355 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm" (OuterVolumeSpecName: "kube-api-access-z5cmm") pod "4d5c1bf9-0f8d-4363-8afc-f764165812c8" (UID: "4d5c1bf9-0f8d-4363-8afc-f764165812c8"). InnerVolumeSpecName "kube-api-access-z5cmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.967700 4642 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.967728 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5cmm\" (UniqueName: \"kubernetes.io/projected/4d5c1bf9-0f8d-4363-8afc-f764165812c8-kube-api-access-z5cmm\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:52 crc kubenswrapper[4642]: I0128 06:58:52.967741 4642 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5c1bf9-0f8d-4363-8afc-f764165812c8-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422161 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zrrr7_4d5c1bf9-0f8d-4363-8afc-f764165812c8/console/0.log" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422241 4642 generic.go:334] "Generic (PLEG): container finished" podID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerID="5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824" exitCode=2 Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422273 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zrrr7" event={"ID":"4d5c1bf9-0f8d-4363-8afc-f764165812c8","Type":"ContainerDied","Data":"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824"} Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422307 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zrrr7" event={"ID":"4d5c1bf9-0f8d-4363-8afc-f764165812c8","Type":"ContainerDied","Data":"b18f645d44a502c8183f4e50ba70dcb71d6d2ef5c5e5aa7e1becc2b5370c98a1"} Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422335 4642 scope.go:117] "RemoveContainer" containerID="5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.422401 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zrrr7" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.437027 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.439562 4642 scope.go:117] "RemoveContainer" containerID="5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824" Jan 28 06:58:53 crc kubenswrapper[4642]: E0128 06:58:53.439930 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824\": container with ID starting with 5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824 not found: ID does not exist" containerID="5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.439965 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824"} err="failed to get container status \"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824\": rpc error: code = NotFound desc = could not find container \"5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824\": container with ID starting with 5fdd9de374dc7bbbe4544d2061418cadeafd97dc7af4db1f60f5a2a8b8aa9824 not found: ID does not exist" Jan 28 06:58:53 crc kubenswrapper[4642]: I0128 06:58:53.440410 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zrrr7"] Jan 28 06:58:55 crc kubenswrapper[4642]: I0128 06:58:55.105051 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" path="/var/lib/kubelet/pods/4d5c1bf9-0f8d-4363-8afc-f764165812c8/volumes" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.020946 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd"] Jan 28 06:59:02 crc kubenswrapper[4642]: E0128 06:59:02.021654 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerName="console" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021668 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerName="console" Jan 28 06:59:02 crc kubenswrapper[4642]: E0128 06:59:02.021683 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="util" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021688 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="util" Jan 28 06:59:02 crc kubenswrapper[4642]: E0128 06:59:02.021698 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="pull" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021703 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="pull" Jan 28 06:59:02 crc kubenswrapper[4642]: E0128 06:59:02.021714 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="extract" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021719 4642 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="extract" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021818 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5c1bf9-0f8d-4363-8afc-f764165812c8" containerName="console" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.021827 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec6cae0-ef33-4521-b704-1fead4aca74b" containerName="extract" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.022138 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.024463 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rb5xz" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.024643 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.025308 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.025414 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.025433 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.036429 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd"] Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.062748 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpfm\" (UniqueName: \"kubernetes.io/projected/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-kube-api-access-wzpfm\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.062806 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-webhook-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.062861 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-apiservice-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.164039 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-webhook-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " 
pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.164083 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-apiservice-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.164219 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzpfm\" (UniqueName: \"kubernetes.io/projected/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-kube-api-access-wzpfm\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.170198 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-webhook-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.170253 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-apiservice-cert\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.182248 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzpfm\" (UniqueName: \"kubernetes.io/projected/4b4ddf14-3402-4717-8cd7-9858e01a1bc2-kube-api-access-wzpfm\") pod \"metallb-operator-controller-manager-85fb65d6bf-4cwxd\" (UID: \"4b4ddf14-3402-4717-8cd7-9858e01a1bc2\") " pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.264379 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5"] Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.265170 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.266681 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.266694 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5n5ms" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.267104 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.276540 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5"] Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.336367 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.366207 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-webhook-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.366264 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz8n\" (UniqueName: \"kubernetes.io/projected/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-kube-api-access-7mz8n\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.366290 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-apiservice-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.467092 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-webhook-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.467131 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mz8n\" (UniqueName: \"kubernetes.io/projected/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-kube-api-access-7mz8n\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.467162 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-apiservice-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.472496 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-apiservice-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.475643 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-webhook-cert\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " 
pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.485981 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mz8n\" (UniqueName: \"kubernetes.io/projected/1fcc6dc0-d8c3-47a9-965d-dec1320015c6-kube-api-access-7mz8n\") pod \"metallb-operator-webhook-server-765f49f7c6-dglx5\" (UID: \"1fcc6dc0-d8c3-47a9-965d-dec1320015c6\") " pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.576512 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.704137 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd"] Jan 28 06:59:02 crc kubenswrapper[4642]: I0128 06:59:02.724478 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5"] Jan 28 06:59:02 crc kubenswrapper[4642]: W0128 06:59:02.727727 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fcc6dc0_d8c3_47a9_965d_dec1320015c6.slice/crio-f9b12049ed449fd9b2d44bbf7fed46994569369ded0008d8523ab9108e55be1b WatchSource:0}: Error finding container f9b12049ed449fd9b2d44bbf7fed46994569369ded0008d8523ab9108e55be1b: Status 404 returned error can't find the container with id f9b12049ed449fd9b2d44bbf7fed46994569369ded0008d8523ab9108e55be1b Jan 28 06:59:03 crc kubenswrapper[4642]: I0128 06:59:03.464722 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" event={"ID":"4b4ddf14-3402-4717-8cd7-9858e01a1bc2","Type":"ContainerStarted","Data":"0e4cd1f00058b557bb4fd928424208c1f332037a39709d159c3e03bd0ecf695a"} Jan 28 06:59:03 crc kubenswrapper[4642]: I0128 06:59:03.466665 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" event={"ID":"1fcc6dc0-d8c3-47a9-965d-dec1320015c6","Type":"ContainerStarted","Data":"f9b12049ed449fd9b2d44bbf7fed46994569369ded0008d8523ab9108e55be1b"} Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.487818 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" event={"ID":"4b4ddf14-3402-4717-8cd7-9858e01a1bc2","Type":"ContainerStarted","Data":"926ba439e05a3f327b6b607fa4a32d5918bf8dbec37db219ae40cb0056286ee5"} Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.488396 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.489712 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" event={"ID":"1fcc6dc0-d8c3-47a9-965d-dec1320015c6","Type":"ContainerStarted","Data":"5f90daeb5ae8a4c9b7ba3a88b234affa8ccecd1fa73fd25a9622830c16342458"} Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.489929 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.505683 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" podStartSLOduration=0.885302654 podStartE2EDuration="4.505660932s" podCreationTimestamp="2026-01-28 06:59:02 +0000 UTC" firstStartedPulling="2026-01-28 06:59:02.709482132 +0000 UTC m=+665.941570941" lastFinishedPulling="2026-01-28 06:59:06.32984041 +0000 UTC m=+669.561929219" observedRunningTime="2026-01-28 06:59:06.501001085 +0000 UTC m=+669.733089894" watchObservedRunningTime="2026-01-28 06:59:06.505660932 +0000 UTC m=+669.737749741" Jan 28 06:59:06 crc kubenswrapper[4642]: I0128 06:59:06.524467 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" podStartSLOduration=0.911458027 podStartE2EDuration="4.524454426s" podCreationTimestamp="2026-01-28 06:59:02 +0000 UTC" firstStartedPulling="2026-01-28 06:59:02.73066939 +0000 UTC m=+665.962758199" lastFinishedPulling="2026-01-28 06:59:06.34366579 +0000 UTC m=+669.575754598" observedRunningTime="2026-01-28 06:59:06.520777629 +0000 UTC m=+669.752866438" watchObservedRunningTime="2026-01-28 06:59:06.524454426 +0000 UTC m=+669.756543236" Jan 28 06:59:22 crc kubenswrapper[4642]: I0128 06:59:22.585154 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-765f49f7c6-dglx5" Jan 28 06:59:38 crc kubenswrapper[4642]: I0128 06:59:38.199822 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 06:59:38 crc kubenswrapper[4642]: I0128 06:59:38.200454 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.339694 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85fb65d6bf-4cwxd" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.956267 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx"] Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.956893 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.958130 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hj5rr" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.958656 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.959297 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-c9xns"] Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.961054 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.962344 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.962553 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965531 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-startup\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965563 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-conf\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965594 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqr7\" (UniqueName: \"kubernetes.io/projected/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-kube-api-access-wpqr7\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965610 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965656 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-reloader\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965684 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics-certs\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965708 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-sockets\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965749 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthkv\" (UniqueName: \"kubernetes.io/projected/7eff0229-6d46-439f-9e3b-b1382d2615ee-kube-api-access-vthkv\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:42 crc kubenswrapper[4642]: I0128 06:59:42.965772 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.008102 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6jjwb"] Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.008975 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.011851 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx"] Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.013552 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.014172 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.015162 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.023233 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qs878" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.035263 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-bwswz"] Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.035987 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.038770 4642 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.042288 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bwswz"] Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066795 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066835 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-cert\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066858 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-startup\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066878 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-conf\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066895 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066919 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metallb-excludel2\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.066923 4642 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066938 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metrics-certs\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066961 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqr7\" (UniqueName: \"kubernetes.io/projected/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-kube-api-access-wpqr7\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.066973 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert podName:6b78a60e-9afd-4252-98b0-a1ba76c8e54c nodeName:}" failed. No retries permitted until 2026-01-28 06:59:43.566957556 +0000 UTC m=+706.799046365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert") pod "frr-k8s-webhook-server-7df86c4f6c-g79zx" (UID: "6b78a60e-9afd-4252-98b0-a1ba76c8e54c") : secret "frr-k8s-webhook-server-cert" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.066988 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067016 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067035 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-reloader\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067054 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics-certs\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067080 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf242\" (UniqueName: \"kubernetes.io/projected/78316030-2b3d-4a8a-b7ed-3ace14a05e80-kube-api-access-hf242\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067095 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-sockets\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067112 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknls\" (UniqueName: \"kubernetes.io/projected/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-kube-api-access-dknls\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067144 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vthkv\" (UniqueName: \"kubernetes.io/projected/7eff0229-6d46-439f-9e3b-b1382d2615ee-kube-api-access-vthkv\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067755 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-conf\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067754 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-reloader\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.067877 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.068147 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-sockets\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.068178 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7eff0229-6d46-439f-9e3b-b1382d2615ee-frr-startup\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.073712 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7eff0229-6d46-439f-9e3b-b1382d2615ee-metrics-certs\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.083647 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthkv\" (UniqueName: \"kubernetes.io/projected/7eff0229-6d46-439f-9e3b-b1382d2615ee-kube-api-access-vthkv\") pod \"frr-k8s-c9xns\" (UID: \"7eff0229-6d46-439f-9e3b-b1382d2615ee\") " pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.084629 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqr7\" (UniqueName: \"kubernetes.io/projected/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-kube-api-access-wpqr7\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168082 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168401 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hf242\" (UniqueName: \"kubernetes.io/projected/78316030-2b3d-4a8a-b7ed-3ace14a05e80-kube-api-access-hf242\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.168274 4642 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168428 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dknls\" (UniqueName: \"kubernetes.io/projected/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-kube-api-access-dknls\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.168478 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist podName:78316030-2b3d-4a8a-b7ed-3ace14a05e80 nodeName:}" failed. No retries permitted until 2026-01-28 06:59:43.668460107 +0000 UTC m=+706.900548916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist") pod "speaker-6jjwb" (UID: "78316030-2b3d-4a8a-b7ed-3ace14a05e80") : secret "metallb-memberlist" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168520 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-cert\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168541 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.168661 4642 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.168707 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs podName:cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2 nodeName:}" failed. No retries permitted until 2026-01-28 06:59:43.668694238 +0000 UTC m=+706.900783047 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs") pod "controller-6968d8fdc4-bwswz" (UID: "cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2") : secret "controller-certs-secret" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168734 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metallb-excludel2\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.168764 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metrics-certs\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.169415 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metallb-excludel2\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.172051 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-cert\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.178665 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-metrics-certs\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.181985 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dknls\" (UniqueName: \"kubernetes.io/projected/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-kube-api-access-dknls\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.183848 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf242\" (UniqueName: \"kubernetes.io/projected/78316030-2b3d-4a8a-b7ed-3ace14a05e80-kube-api-access-hf242\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.278125 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.574002 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.578004 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b78a60e-9afd-4252-98b0-a1ba76c8e54c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-g79zx\" (UID: \"6b78a60e-9afd-4252-98b0-a1ba76c8e54c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.674616 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.674692 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.675254 4642 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 06:59:43 crc kubenswrapper[4642]: E0128 06:59:43.675320 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist podName:78316030-2b3d-4a8a-b7ed-3ace14a05e80 nodeName:}" failed. No retries permitted until 2026-01-28 06:59:44.6753032 +0000 UTC m=+707.907392009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist") pod "speaker-6jjwb" (UID: "78316030-2b3d-4a8a-b7ed-3ace14a05e80") : secret "metallb-memberlist" not found Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.677618 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2-metrics-certs\") pod \"controller-6968d8fdc4-bwswz\" (UID: \"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2\") " pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.682776 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"a7406883137243708d295e9089c9f0580df53e21b40185beb792276d91130934"} Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.872529 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:43 crc kubenswrapper[4642]: I0128 06:59:43.945959 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.056422 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx"] Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.113828 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-bwswz"] Jan 28 06:59:44 crc kubenswrapper[4642]: W0128 06:59:44.117165 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb4d1f9b_a7f7_4bc5_89e1_3c175cbdaee2.slice/crio-f0ee2d79e948e3e907f9db4b743030bb81cb37a8370bddf02135de5c1a3cbcb1 WatchSource:0}: Error finding container f0ee2d79e948e3e907f9db4b743030bb81cb37a8370bddf02135de5c1a3cbcb1: Status 404 returned error can't find the container with id f0ee2d79e948e3e907f9db4b743030bb81cb37a8370bddf02135de5c1a3cbcb1 Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.690722 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.692546 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bwswz" event={"ID":"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2","Type":"ContainerStarted","Data":"a26ae19fd4a796a446e277957d063470e81b5958d6ee5c38ab5de83ac825d83b"} Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.692588 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bwswz" event={"ID":"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2","Type":"ContainerStarted","Data":"5f87deae8526b2ba8bd16a48681ca8a24015bedfb7180dc31d1da379f33e9977"} Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.692601 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-bwswz" event={"ID":"cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2","Type":"ContainerStarted","Data":"f0ee2d79e948e3e907f9db4b743030bb81cb37a8370bddf02135de5c1a3cbcb1"} Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.692759 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.695935 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" event={"ID":"6b78a60e-9afd-4252-98b0-a1ba76c8e54c","Type":"ContainerStarted","Data":"74b0a531b4f6c59e74a47a972b91dbaf08c9625a3b5fead6df78aefdf1c31a51"} Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.697059 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/78316030-2b3d-4a8a-b7ed-3ace14a05e80-memberlist\") pod \"speaker-6jjwb\" (UID: \"78316030-2b3d-4a8a-b7ed-3ace14a05e80\") " pod="metallb-system/speaker-6jjwb" Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.713139 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-bwswz" podStartSLOduration=1.713125462 podStartE2EDuration="1.713125462s" podCreationTimestamp="2026-01-28 06:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 06:59:44.70791496 +0000 UTC m=+707.940003768" watchObservedRunningTime="2026-01-28 06:59:44.713125462 +0000 UTC m=+707.945214271" Jan 28 06:59:44 crc kubenswrapper[4642]: I0128 06:59:44.819872 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6jjwb" Jan 28 06:59:44 crc kubenswrapper[4642]: W0128 06:59:44.837683 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78316030_2b3d_4a8a_b7ed_3ace14a05e80.slice/crio-a6bd745e37efd4acf0365fa38ab47c889454af3116647f669c34efbbb5367cc9 WatchSource:0}: Error finding container a6bd745e37efd4acf0365fa38ab47c889454af3116647f669c34efbbb5367cc9: Status 404 returned error can't find the container with id a6bd745e37efd4acf0365fa38ab47c889454af3116647f669c34efbbb5367cc9 Jan 28 06:59:45 crc kubenswrapper[4642]: I0128 06:59:45.704655 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6jjwb" event={"ID":"78316030-2b3d-4a8a-b7ed-3ace14a05e80","Type":"ContainerStarted","Data":"79f10dbbab5aad28d64d7aa317bee0b0c8efe9e8349a7c070ab5ca29b378e837"} Jan 28 06:59:45 crc kubenswrapper[4642]: I0128 06:59:45.704697 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6jjwb" event={"ID":"78316030-2b3d-4a8a-b7ed-3ace14a05e80","Type":"ContainerStarted","Data":"52b147572c140fa307bf41c35d8446ef57fac10c2fa9fd303623b0a08d8499a3"} Jan 28 06:59:45 crc kubenswrapper[4642]: I0128 06:59:45.704709 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6jjwb" event={"ID":"78316030-2b3d-4a8a-b7ed-3ace14a05e80","Type":"ContainerStarted","Data":"a6bd745e37efd4acf0365fa38ab47c889454af3116647f669c34efbbb5367cc9"} Jan 28 06:59:45 crc kubenswrapper[4642]: I0128 06:59:45.704943 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6jjwb" Jan 28 06:59:45 crc kubenswrapper[4642]: I0128 06:59:45.722889 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6jjwb" podStartSLOduration=3.722867638 podStartE2EDuration="3.722867638s" podCreationTimestamp="2026-01-28 06:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 06:59:45.721862948 +0000 UTC m=+708.953951758" watchObservedRunningTime="2026-01-28 06:59:45.722867638 +0000 UTC m=+708.954956448" Jan 28 06:59:55 crc kubenswrapper[4642]: I0128 06:59:55.762251 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" event={"ID":"6b78a60e-9afd-4252-98b0-a1ba76c8e54c","Type":"ContainerStarted","Data":"23aea5e7e11a4868e8c09eff54c147db68cb32ff1b9c7e106e3e64f7cc6ebc5c"} Jan 28 06:59:55 crc kubenswrapper[4642]: I0128 06:59:55.763221 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 06:59:55 crc kubenswrapper[4642]: I0128 06:59:55.764308 4642 generic.go:334] "Generic (PLEG): container finished" podID="7eff0229-6d46-439f-9e3b-b1382d2615ee" containerID="680625dad3ac38663f88fdb1b56f70b9609f1fe0afaa109f189bb976b7f6952d" exitCode=0 Jan 28 06:59:55 crc kubenswrapper[4642]: I0128 06:59:55.764357 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" 
event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerDied","Data":"680625dad3ac38663f88fdb1b56f70b9609f1fe0afaa109f189bb976b7f6952d"} Jan 28 06:59:55 crc kubenswrapper[4642]: I0128 06:59:55.782665 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" podStartSLOduration=3.179345939 podStartE2EDuration="13.782652001s" podCreationTimestamp="2026-01-28 06:59:42 +0000 UTC" firstStartedPulling="2026-01-28 06:59:44.053664057 +0000 UTC m=+707.285752866" lastFinishedPulling="2026-01-28 06:59:54.656970119 +0000 UTC m=+717.889058928" observedRunningTime="2026-01-28 06:59:55.777833085 +0000 UTC m=+719.009921894" watchObservedRunningTime="2026-01-28 06:59:55.782652001 +0000 UTC m=+719.014740810" Jan 28 06:59:56 crc kubenswrapper[4642]: I0128 06:59:56.772574 4642 generic.go:334] "Generic (PLEG): container finished" podID="7eff0229-6d46-439f-9e3b-b1382d2615ee" containerID="2f06bedfd0beaee64a0aad650f7c1f75be960521fdd62c149ab433f5f7944325" exitCode=0 Jan 28 06:59:56 crc kubenswrapper[4642]: I0128 06:59:56.772637 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerDied","Data":"2f06bedfd0beaee64a0aad650f7c1f75be960521fdd62c149ab433f5f7944325"} Jan 28 06:59:57 crc kubenswrapper[4642]: I0128 06:59:57.779622 4642 generic.go:334] "Generic (PLEG): container finished" podID="7eff0229-6d46-439f-9e3b-b1382d2615ee" containerID="db62fc2bd306e878236d7009780a7ee56807e18e25c9bc3ec216e9cf13aaec20" exitCode=0 Jan 28 06:59:57 crc kubenswrapper[4642]: I0128 06:59:57.779719 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerDied","Data":"db62fc2bd306e878236d7009780a7ee56807e18e25c9bc3ec216e9cf13aaec20"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790084 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"39a7ab88d4fe2c97ce393e72080adad4495415efe1d19e4711741267c566014b"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790421 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c9xns" Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790438 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"91e0a4f50143239209cac5551490f32856688e6de30d5983f239f286bd7149ee"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790450 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"1abb9a30da958ab969d80ac1e5e7515e492656a85eb893eae4e77d1bf53412da"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790460 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"0c32d6c0a16ca9fde8c4b6cb74fce4d74d7b2ce0104c058d883a650fe14ff843"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790467 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" 
event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"127ad904a434fbeae4fc5f9d488df461467c8448c4b1cf93d873c2a4a1766076"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.790478 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c9xns" event={"ID":"7eff0229-6d46-439f-9e3b-b1382d2615ee","Type":"ContainerStarted","Data":"659f8e8bdd868a7ff280d22741aa200f76bb1569f5f5c396a6853e3cbd0aa24a"} Jan 28 06:59:58 crc kubenswrapper[4642]: I0128 06:59:58.811258 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-c9xns" podStartSLOduration=5.541514693 podStartE2EDuration="16.811238857s" podCreationTimestamp="2026-01-28 06:59:42 +0000 UTC" firstStartedPulling="2026-01-28 06:59:43.381551863 +0000 UTC m=+706.613640672" lastFinishedPulling="2026-01-28 06:59:54.651276026 +0000 UTC m=+717.883364836" observedRunningTime="2026-01-28 06:59:58.807318001 +0000 UTC m=+722.039406811" watchObservedRunningTime="2026-01-28 06:59:58.811238857 +0000 UTC m=+722.043327666" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.131770 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg"] Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.132405 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.134096 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.135136 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.142696 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg"] Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.209793 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.210168 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxgj\" (UniqueName: \"kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.210216 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.311302 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.311399 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxgj\" (UniqueName: \"kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.311426 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.312427 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.317477 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.326790 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxgj\" (UniqueName: \"kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj\") pod \"collect-profiles-29493060-9w4pg\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.449477 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:00 crc kubenswrapper[4642]: I0128 07:00:00.885487 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg"] Jan 28 07:00:00 crc kubenswrapper[4642]: W0128 07:00:00.885662 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfebb5d9c_82a0_457d_8bc8_e84da755454d.slice/crio-5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2 WatchSource:0}: Error finding container 5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2: Status 404 returned error can't find the container with id 5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2 Jan 28 07:00:01 crc kubenswrapper[4642]: I0128 07:00:01.820039 4642 generic.go:334] "Generic (PLEG): container finished" podID="febb5d9c-82a0-457d-8bc8-e84da755454d" containerID="d6f045c17d53905b1878be61c8d0d08769d628230ad4db0e4f63d9a4237f5596" exitCode=0 Jan 28 07:00:01 crc kubenswrapper[4642]: I0128 07:00:01.820131 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" event={"ID":"febb5d9c-82a0-457d-8bc8-e84da755454d","Type":"ContainerDied","Data":"d6f045c17d53905b1878be61c8d0d08769d628230ad4db0e4f63d9a4237f5596"} Jan 28 07:00:01 crc kubenswrapper[4642]: I0128 07:00:01.820482 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" event={"ID":"febb5d9c-82a0-457d-8bc8-e84da755454d","Type":"ContainerStarted","Data":"5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2"} Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.050842 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.154452 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume\") pod \"febb5d9c-82a0-457d-8bc8-e84da755454d\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.154526 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knxgj\" (UniqueName: \"kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj\") pod \"febb5d9c-82a0-457d-8bc8-e84da755454d\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.154553 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume\") pod \"febb5d9c-82a0-457d-8bc8-e84da755454d\" (UID: \"febb5d9c-82a0-457d-8bc8-e84da755454d\") " Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.155247 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume" (OuterVolumeSpecName: "config-volume") pod "febb5d9c-82a0-457d-8bc8-e84da755454d" (UID: "febb5d9c-82a0-457d-8bc8-e84da755454d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.163270 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "febb5d9c-82a0-457d-8bc8-e84da755454d" (UID: "febb5d9c-82a0-457d-8bc8-e84da755454d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.172324 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj" (OuterVolumeSpecName: "kube-api-access-knxgj") pod "febb5d9c-82a0-457d-8bc8-e84da755454d" (UID: "febb5d9c-82a0-457d-8bc8-e84da755454d"). InnerVolumeSpecName "kube-api-access-knxgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.256299 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/febb5d9c-82a0-457d-8bc8-e84da755454d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.256336 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knxgj\" (UniqueName: \"kubernetes.io/projected/febb5d9c-82a0-457d-8bc8-e84da755454d-kube-api-access-knxgj\") on node \"crc\" DevicePath \"\"" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.256349 4642 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/febb5d9c-82a0-457d-8bc8-e84da755454d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.278965 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c9xns" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.316058 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c9xns" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.837423 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.837424 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg" event={"ID":"febb5d9c-82a0-457d-8bc8-e84da755454d","Type":"ContainerDied","Data":"5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2"} Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.837952 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1fa1cbebc7cfd5da94c238bfe8eb1adf35d18520496d112d29e409021c2ea2" Jan 28 07:00:03 crc kubenswrapper[4642]: I0128 07:00:03.949836 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-bwswz" Jan 28 07:00:04 crc kubenswrapper[4642]: I0128 07:00:04.823456 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6jjwb" Jan 28 07:00:08 crc kubenswrapper[4642]: I0128 07:00:08.200066 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:00:08 crc kubenswrapper[4642]: I0128 07:00:08.200433 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.446925 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:10 crc kubenswrapper[4642]: E0128 07:00:10.447330 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febb5d9c-82a0-457d-8bc8-e84da755454d" containerName="collect-profiles" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.447350 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="febb5d9c-82a0-457d-8bc8-e84da755454d" containerName="collect-profiles" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.447506 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="febb5d9c-82a0-457d-8bc8-e84da755454d" containerName="collect-profiles" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.448273 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.451388 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.451482 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-dlqvq" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.451537 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.462813 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.570909 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxt6\" (UniqueName: \"kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6\") pod \"openstack-operator-index-wn666\" (UID: \"0db90a49-adb7-4431-8624-881e45653dc9\") " pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.671983 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svxt6\" (UniqueName: \"kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6\") pod \"openstack-operator-index-wn666\" (UID: \"0db90a49-adb7-4431-8624-881e45653dc9\") " pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.690964 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svxt6\" (UniqueName: \"kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6\") pod \"openstack-operator-index-wn666\" (UID: \"0db90a49-adb7-4431-8624-881e45653dc9\") " pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:10 crc kubenswrapper[4642]: I0128 07:00:10.769536 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:11 crc kubenswrapper[4642]: I0128 07:00:11.137721 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:11 crc kubenswrapper[4642]: W0128 07:00:11.141977 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db90a49_adb7_4431_8624_881e45653dc9.slice/crio-ee23729724c19ced0e4d53b23dddca377325b6cef9d862a1c6c0954b5edc2cf8 WatchSource:0}: Error finding container ee23729724c19ced0e4d53b23dddca377325b6cef9d862a1c6c0954b5edc2cf8: Status 404 returned error can't find the container with id ee23729724c19ced0e4d53b23dddca377325b6cef9d862a1c6c0954b5edc2cf8 Jan 28 07:00:11 crc kubenswrapper[4642]: I0128 07:00:11.881518 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wn666" event={"ID":"0db90a49-adb7-4431-8624-881e45653dc9","Type":"ContainerStarted","Data":"ee23729724c19ced0e4d53b23dddca377325b6cef9d862a1c6c0954b5edc2cf8"} Jan 28 07:00:12 crc kubenswrapper[4642]: I0128 07:00:12.892309 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wn666" event={"ID":"0db90a49-adb7-4431-8624-881e45653dc9","Type":"ContainerStarted","Data":"d932b67e0d0dac3c344a0f6e6843729ad2749a00c21dbe72a9e29fa47b0f96b1"} Jan 28 07:00:12 crc kubenswrapper[4642]: I0128 07:00:12.909073 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wn666" podStartSLOduration=2.087445662 podStartE2EDuration="2.909055694s" podCreationTimestamp="2026-01-28 07:00:10 +0000 UTC" firstStartedPulling="2026-01-28 07:00:11.143178984 +0000 UTC m=+734.375267793" lastFinishedPulling="2026-01-28 07:00:11.964789016 +0000 UTC m=+735.196877825" observedRunningTime="2026-01-28 07:00:12.905169573 +0000 UTC m=+736.137258383" watchObservedRunningTime="2026-01-28 07:00:12.909055694 +0000 UTC m=+736.141144503" Jan 28 07:00:13 crc kubenswrapper[4642]: I0128 07:00:13.282019 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c9xns" Jan 28 07:00:13 crc kubenswrapper[4642]: I0128 07:00:13.878382 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-g79zx" Jan 28 07:00:15 crc kubenswrapper[4642]: I0128 07:00:15.637690 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:15 crc kubenswrapper[4642]: I0128 07:00:15.637905 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wn666" podUID="0db90a49-adb7-4431-8624-881e45653dc9" containerName="registry-server" containerID="cri-o://d932b67e0d0dac3c344a0f6e6843729ad2749a00c21dbe72a9e29fa47b0f96b1" gracePeriod=2 Jan 28 07:00:15 crc kubenswrapper[4642]: I0128 07:00:15.918133 4642 generic.go:334] "Generic (PLEG): container finished" podID="0db90a49-adb7-4431-8624-881e45653dc9" containerID="d932b67e0d0dac3c344a0f6e6843729ad2749a00c21dbe72a9e29fa47b0f96b1" exitCode=0 Jan 28 07:00:15 crc kubenswrapper[4642]: I0128 07:00:15.918277 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wn666" 
event={"ID":"0db90a49-adb7-4431-8624-881e45653dc9","Type":"ContainerDied","Data":"d932b67e0d0dac3c344a0f6e6843729ad2749a00c21dbe72a9e29fa47b0f96b1"} Jan 28 07:00:15 crc kubenswrapper[4642]: I0128 07:00:15.943791 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.039692 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svxt6\" (UniqueName: \"kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6\") pod \"0db90a49-adb7-4431-8624-881e45653dc9\" (UID: \"0db90a49-adb7-4431-8624-881e45653dc9\") " Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.045813 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6" (OuterVolumeSpecName: "kube-api-access-svxt6") pod "0db90a49-adb7-4431-8624-881e45653dc9" (UID: "0db90a49-adb7-4431-8624-881e45653dc9"). InnerVolumeSpecName "kube-api-access-svxt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.141648 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svxt6\" (UniqueName: \"kubernetes.io/projected/0db90a49-adb7-4431-8624-881e45653dc9-kube-api-access-svxt6\") on node \"crc\" DevicePath \"\"" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.242358 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j2f6p"] Jan 28 07:00:16 crc kubenswrapper[4642]: E0128 07:00:16.242683 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db90a49-adb7-4431-8624-881e45653dc9" containerName="registry-server" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.242707 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db90a49-adb7-4431-8624-881e45653dc9" containerName="registry-server" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.242815 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db90a49-adb7-4431-8624-881e45653dc9" containerName="registry-server" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.243301 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.248657 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j2f6p"] Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.344706 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2f48\" (UniqueName: \"kubernetes.io/projected/68d33b51-456a-4363-83ec-7f60de722a77-kube-api-access-v2f48\") pod \"openstack-operator-index-j2f6p\" (UID: \"68d33b51-456a-4363-83ec-7f60de722a77\") " pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.446563 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2f48\" (UniqueName: \"kubernetes.io/projected/68d33b51-456a-4363-83ec-7f60de722a77-kube-api-access-v2f48\") pod \"openstack-operator-index-j2f6p\" (UID: \"68d33b51-456a-4363-83ec-7f60de722a77\") " pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.462253 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2f48\" (UniqueName: \"kubernetes.io/projected/68d33b51-456a-4363-83ec-7f60de722a77-kube-api-access-v2f48\") pod \"openstack-operator-index-j2f6p\" (UID: \"68d33b51-456a-4363-83ec-7f60de722a77\") " pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.557274 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.927901 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wn666" event={"ID":"0db90a49-adb7-4431-8624-881e45653dc9","Type":"ContainerDied","Data":"ee23729724c19ced0e4d53b23dddca377325b6cef9d862a1c6c0954b5edc2cf8"} Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.928206 4642 scope.go:117] "RemoveContainer" containerID="d932b67e0d0dac3c344a0f6e6843729ad2749a00c21dbe72a9e29fa47b0f96b1" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.928002 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wn666" Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.959013 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:16 crc kubenswrapper[4642]: W0128 07:00:16.961106 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68d33b51_456a_4363_83ec_7f60de722a77.slice/crio-29638f573f13174d18f8bc3fb71481d6e4f9ec6243480979745f1f26d35ee768 WatchSource:0}: Error finding container 29638f573f13174d18f8bc3fb71481d6e4f9ec6243480979745f1f26d35ee768: Status 404 returned error can't find the container with id 29638f573f13174d18f8bc3fb71481d6e4f9ec6243480979745f1f26d35ee768 Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.961897 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j2f6p"] Jan 28 07:00:16 crc kubenswrapper[4642]: I0128 07:00:16.964513 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-wn666"] Jan 28 07:00:17 crc kubenswrapper[4642]: I0128 07:00:17.107276 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db90a49-adb7-4431-8624-881e45653dc9" path="/var/lib/kubelet/pods/0db90a49-adb7-4431-8624-881e45653dc9/volumes" Jan 28 07:00:17 crc kubenswrapper[4642]: I0128 07:00:17.933919 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j2f6p" event={"ID":"68d33b51-456a-4363-83ec-7f60de722a77","Type":"ContainerStarted","Data":"70dd38acefd320288a6663c9d12696bdfcd28b29af7064b258546b4cbaa21975"} Jan 28 07:00:17 crc kubenswrapper[4642]: I0128 07:00:17.934291 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j2f6p" event={"ID":"68d33b51-456a-4363-83ec-7f60de722a77","Type":"ContainerStarted","Data":"29638f573f13174d18f8bc3fb71481d6e4f9ec6243480979745f1f26d35ee768"} Jan 28 07:00:17 crc kubenswrapper[4642]: I0128 07:00:17.947980 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j2f6p" podStartSLOduration=1.438918857 podStartE2EDuration="1.947970282s" podCreationTimestamp="2026-01-28 07:00:16 +0000 UTC" firstStartedPulling="2026-01-28 07:00:16.964209456 +0000 UTC m=+740.196298265" lastFinishedPulling="2026-01-28 07:00:17.473260881 +0000 UTC m=+740.705349690" observedRunningTime="2026-01-28 07:00:17.945923461 +0000 UTC m=+741.178012270" watchObservedRunningTime="2026-01-28 07:00:17.947970282 +0000 UTC m=+741.180059091" Jan 28 07:00:26 crc kubenswrapper[4642]: I0128 07:00:26.558017 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:26 crc kubenswrapper[4642]: I0128 07:00:26.558727 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:26 crc kubenswrapper[4642]: I0128 07:00:26.583727 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:27 crc kubenswrapper[4642]: I0128 07:00:27.003215 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-j2f6p" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.471763 4642 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"] Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.472848 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.475075 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tlnfk" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.483646 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"] Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.503156 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.503291 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.503318 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzq2\" (UniqueName: \"kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.604643 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.604729 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.604759 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzq2\" (UniqueName: \"kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" 
Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.605296 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.605473 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.621958 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzq2\" (UniqueName: \"kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:28 crc kubenswrapper[4642]: I0128 07:00:28.789831 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:29 crc kubenswrapper[4642]: I0128 07:00:29.148918 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"]
Jan 28 07:00:29 crc kubenswrapper[4642]: I0128 07:00:29.999199 4642 generic.go:334] "Generic (PLEG): container finished" podID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerID="15b9b790c15f4c0e5e336001082117a715f91a2687d3e70b2dc51612b8912308" exitCode=0
Jan 28 07:00:29 crc kubenswrapper[4642]: I0128 07:00:29.999268 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" event={"ID":"ca36e19a-c862-47bc-b335-0f3d55dc2d4c","Type":"ContainerDied","Data":"15b9b790c15f4c0e5e336001082117a715f91a2687d3e70b2dc51612b8912308"}
Jan 28 07:00:30 crc kubenswrapper[4642]: I0128 07:00:29.999743 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" event={"ID":"ca36e19a-c862-47bc-b335-0f3d55dc2d4c","Type":"ContainerStarted","Data":"28801d69ddea9e4770bc4ae2163a45a456ad3435a0c7ee445c33a4e58d80e5d4"}
Jan 28 07:00:31 crc kubenswrapper[4642]: I0128 07:00:31.007517 4642 generic.go:334] "Generic (PLEG): container finished" podID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerID="7af43afcbf52f4201cbc71491f9165c1d147226cd79a6786a5ec87cb78de3b9e" exitCode=0
Jan 28 07:00:31 crc kubenswrapper[4642]: I0128 07:00:31.007638 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" event={"ID":"ca36e19a-c862-47bc-b335-0f3d55dc2d4c","Type":"ContainerDied","Data":"7af43afcbf52f4201cbc71491f9165c1d147226cd79a6786a5ec87cb78de3b9e"}
Jan 28 07:00:32 crc kubenswrapper[4642]: I0128 07:00:32.013549 4642 generic.go:334] "Generic (PLEG): container finished" podID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerID="28c2c025137243c5462993c9beecc14decdf060194d064a82584ec113b823f5e" exitCode=0
Jan 28 07:00:32 crc kubenswrapper[4642]: I0128 07:00:32.013581 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" event={"ID":"ca36e19a-c862-47bc-b335-0f3d55dc2d4c","Type":"ContainerDied","Data":"28c2c025137243c5462993c9beecc14decdf060194d064a82584ec113b823f5e"}
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.231811 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.262531 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgzq2\" (UniqueName: \"kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2\") pod \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") "
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.262598 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util\") pod \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") "
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.262708 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle\") pod \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\" (UID: \"ca36e19a-c862-47bc-b335-0f3d55dc2d4c\") "
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.264070 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle" (OuterVolumeSpecName: "bundle") pod "ca36e19a-c862-47bc-b335-0f3d55dc2d4c" (UID: "ca36e19a-c862-47bc-b335-0f3d55dc2d4c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.268081 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2" (OuterVolumeSpecName: "kube-api-access-vgzq2") pod "ca36e19a-c862-47bc-b335-0f3d55dc2d4c" (UID: "ca36e19a-c862-47bc-b335-0f3d55dc2d4c"). InnerVolumeSpecName "kube-api-access-vgzq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.272210 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util" (OuterVolumeSpecName: "util") pod "ca36e19a-c862-47bc-b335-0f3d55dc2d4c" (UID: "ca36e19a-c862-47bc-b335-0f3d55dc2d4c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.363853 4642 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.363883 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgzq2\" (UniqueName: \"kubernetes.io/projected/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-kube-api-access-vgzq2\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:33 crc kubenswrapper[4642]: I0128 07:00:33.363894 4642 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca36e19a-c862-47bc-b335-0f3d55dc2d4c-util\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.025943 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j" event={"ID":"ca36e19a-c862-47bc-b335-0f3d55dc2d4c","Type":"ContainerDied","Data":"28801d69ddea9e4770bc4ae2163a45a456ad3435a0c7ee445c33a4e58d80e5d4"}
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.025979 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28801d69ddea9e4770bc4ae2163a45a456ad3435a0c7ee445c33a4e58d80e5d4"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.026085 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.043433 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:34 crc kubenswrapper[4642]: E0128 07:00:34.043665 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="util"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.043681 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="util"
Jan 28 07:00:34 crc kubenswrapper[4642]: E0128 07:00:34.043700 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="pull"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.043706 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="pull"
Jan 28 07:00:34 crc kubenswrapper[4642]: E0128 07:00:34.043718 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="extract"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.043723 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="extract"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.043835 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca36e19a-c862-47bc-b335-0f3d55dc2d4c" containerName="extract"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.044599 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.062515 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.070337 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.070385 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.070423 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2gkl\" (UniqueName: \"kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.171826 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.171882 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.171939 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2gkl\" (UniqueName: \"kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.172777 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.172990 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.183909 4642 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.196818 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2gkl\" (UniqueName: \"kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl\") pod \"redhat-operators-tk7kq\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") " pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.370949 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:34 crc kubenswrapper[4642]: I0128 07:00:34.739427 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:35 crc kubenswrapper[4642]: I0128 07:00:35.031974 4642 generic.go:334] "Generic (PLEG): container finished" podID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerID="ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903" exitCode=0
Jan 28 07:00:35 crc kubenswrapper[4642]: I0128 07:00:35.032083 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerDied","Data":"ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903"}
Jan 28 07:00:35 crc kubenswrapper[4642]: I0128 07:00:35.032266 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerStarted","Data":"04403136d50d245abb19acbd7c30cffbdb2bd346c95309cf02cb6f5bfb3faf83"}
Jan 28 07:00:36 crc kubenswrapper[4642]: I0128 07:00:36.041348 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerStarted","Data":"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"}
Jan 28 07:00:37 crc kubenswrapper[4642]: I0128 07:00:37.050169 4642 generic.go:334] "Generic (PLEG): container finished" podID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerID="996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386" exitCode=0
Jan 28 07:00:37 crc kubenswrapper[4642]: I0128 07:00:37.050333 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerDied","Data":"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"}
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.059786 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerStarted","Data":"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"}
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.075664 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tk7kq" podStartSLOduration=1.558285058 podStartE2EDuration="4.075642632s" podCreationTimestamp="2026-01-28 07:00:34 +0000 UTC" firstStartedPulling="2026-01-28 07:00:35.033421115 +0000 UTC m=+758.265509924" lastFinishedPulling="2026-01-28 07:00:37.550778689 +0000 UTC m=+760.782867498" observedRunningTime="2026-01-28 07:00:38.074511243 +0000 UTC m=+761.306600052" watchObservedRunningTime="2026-01-28 07:00:38.075642632 +0000 UTC m=+761.307731440"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.199855 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.199927 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.199991 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.200656 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.200713 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1" gracePeriod=600
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.449487 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.452982 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.458935 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.530382 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.530444 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.530485 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fth\" (UniqueName: \"kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.632409 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.632478 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.632520 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28fth\" (UniqueName: \"kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.633154 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.633160 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.650995 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28fth\" (UniqueName: \"kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth\") pod \"redhat-marketplace-bwrfs\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") " pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.767766 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:38 crc kubenswrapper[4642]: I0128 07:00:38.973206 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.073581 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerStarted","Data":"cf5d3c4c80a6594c90c231815dd2648a7dea525e7ae6d2adc7d2acb598298edc"}
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.076678 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1" exitCode=0
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.076753 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1"}
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.076794 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1"}
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.076812 4642 scope.go:117] "RemoveContainer" containerID="2613a0bc4b999e8496454b6a0b4fbac4291bb4c032e65bdad7717ec571a658c4"
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.906741 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-rqjln"]
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.907578 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.909507 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5x8j4"
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.928039 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-rqjln"]
Jan 28 07:00:39 crc kubenswrapper[4642]: I0128 07:00:39.952068 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn68\" (UniqueName: \"kubernetes.io/projected/c27f0ead-ebcd-4c83-ad72-311bcacff990-kube-api-access-xjn68\") pod \"openstack-operator-controller-init-554f878768-rqjln\" (UID: \"c27f0ead-ebcd-4c83-ad72-311bcacff990\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.053878 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjn68\" (UniqueName: \"kubernetes.io/projected/c27f0ead-ebcd-4c83-ad72-311bcacff990-kube-api-access-xjn68\") pod \"openstack-operator-controller-init-554f878768-rqjln\" (UID: \"c27f0ead-ebcd-4c83-ad72-311bcacff990\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.072763 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjn68\" (UniqueName: \"kubernetes.io/projected/c27f0ead-ebcd-4c83-ad72-311bcacff990-kube-api-access-xjn68\") pod \"openstack-operator-controller-init-554f878768-rqjln\" (UID: \"c27f0ead-ebcd-4c83-ad72-311bcacff990\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.084762 4642 generic.go:334] "Generic (PLEG): container finished" podID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerID="2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418" exitCode=0
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.084815 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerDied","Data":"2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418"}
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.220485 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:40 crc kubenswrapper[4642]: I0128 07:00:40.428398 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-rqjln"]
Jan 28 07:00:40 crc kubenswrapper[4642]: W0128 07:00:40.430198 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc27f0ead_ebcd_4c83_ad72_311bcacff990.slice/crio-238c23ea342deed6b220beb5371fcd4ece923a1a1d9cfb720e78a2d1616e697d WatchSource:0}: Error finding container 238c23ea342deed6b220beb5371fcd4ece923a1a1d9cfb720e78a2d1616e697d: Status 404 returned error can't find the container with id 238c23ea342deed6b220beb5371fcd4ece923a1a1d9cfb720e78a2d1616e697d
Jan 28 07:00:41 crc kubenswrapper[4642]: I0128 07:00:41.094983 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerStarted","Data":"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"}
Jan 28 07:00:41 crc kubenswrapper[4642]: I0128 07:00:41.096954 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln" event={"ID":"c27f0ead-ebcd-4c83-ad72-311bcacff990","Type":"ContainerStarted","Data":"238c23ea342deed6b220beb5371fcd4ece923a1a1d9cfb720e78a2d1616e697d"}
Jan 28 07:00:42 crc kubenswrapper[4642]: I0128 07:00:42.108177 4642 generic.go:334] "Generic (PLEG): container finished" podID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerID="4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365" exitCode=0
Jan 28 07:00:42 crc kubenswrapper[4642]: I0128 07:00:42.108225 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerDied","Data":"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"}
Jan 28 07:00:44 crc kubenswrapper[4642]: I0128 07:00:44.372200 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:44 crc kubenswrapper[4642]: I0128 07:00:44.372844 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:44 crc kubenswrapper[4642]: I0128 07:00:44.405051 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:45 crc kubenswrapper[4642]: I0128 07:00:45.132335 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerStarted","Data":"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"}
Jan 28 07:00:45 crc kubenswrapper[4642]: I0128 07:00:45.134098 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln" event={"ID":"c27f0ead-ebcd-4c83-ad72-311bcacff990","Type":"ContainerStarted","Data":"5e506ae547fb69db81c95bccef8930314205301f3a6120470bc67892835ac9ec"}
Jan 28 07:00:45 crc kubenswrapper[4642]: I0128 07:00:45.149988 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bwrfs" podStartSLOduration=3.161890582 podStartE2EDuration="7.149975591s" podCreationTimestamp="2026-01-28 07:00:38 +0000 UTC" firstStartedPulling="2026-01-28 07:00:40.08662837 +0000 UTC m=+763.318717179" lastFinishedPulling="2026-01-28 07:00:44.07471338 +0000 UTC m=+767.306802188" observedRunningTime="2026-01-28 07:00:45.14903931 +0000 UTC m=+768.381128119" watchObservedRunningTime="2026-01-28 07:00:45.149975591 +0000 UTC m=+768.382064400"
Jan 28 07:00:45 crc kubenswrapper[4642]: I0128 07:00:45.174836 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:45 crc kubenswrapper[4642]: I0128 07:00:45.194143 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln" podStartSLOduration=2.525610644 podStartE2EDuration="6.194126729s" podCreationTimestamp="2026-01-28 07:00:39 +0000 UTC" firstStartedPulling="2026-01-28 07:00:40.431937178 +0000 UTC m=+763.664025986" lastFinishedPulling="2026-01-28 07:00:44.100453262 +0000 UTC m=+767.332542071" observedRunningTime="2026-01-28 07:00:45.181723547 +0000 UTC m=+768.413812356" watchObservedRunningTime="2026-01-28 07:00:45.194126729 +0000 UTC m=+768.426215539"
Jan 28 07:00:46 crc kubenswrapper[4642]: I0128 07:00:46.138820 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:46 crc kubenswrapper[4642]: I0128 07:00:46.637273 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:47 crc kubenswrapper[4642]: I0128 07:00:47.143798 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tk7kq" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="registry-server" containerID="cri-o://9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd" gracePeriod=2
Jan 28 07:00:47 crc kubenswrapper[4642]: I0128 07:00:47.975346 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.075144 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities\") pod \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") "
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.075256 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2gkl\" (UniqueName: \"kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl\") pod \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") "
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.075343 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content\") pod \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\" (UID: \"69d11272-eb01-4e3f-adf9-dd58922e6b8b\") "
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.075867 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities" (OuterVolumeSpecName: "utilities") pod "69d11272-eb01-4e3f-adf9-dd58922e6b8b" (UID: "69d11272-eb01-4e3f-adf9-dd58922e6b8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.081831 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl" (OuterVolumeSpecName: "kube-api-access-g2gkl") pod "69d11272-eb01-4e3f-adf9-dd58922e6b8b" (UID: "69d11272-eb01-4e3f-adf9-dd58922e6b8b"). InnerVolumeSpecName "kube-api-access-g2gkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.153202 4642 generic.go:334] "Generic (PLEG): container finished" podID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerID="9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd" exitCode=0
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.153270 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerDied","Data":"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"}
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.153313 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk7kq" event={"ID":"69d11272-eb01-4e3f-adf9-dd58922e6b8b","Type":"ContainerDied","Data":"04403136d50d245abb19acbd7c30cffbdb2bd346c95309cf02cb6f5bfb3faf83"}
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.153335 4642 scope.go:117] "RemoveContainer" containerID="9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.153923 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tk7kq"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.165265 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69d11272-eb01-4e3f-adf9-dd58922e6b8b" (UID: "69d11272-eb01-4e3f-adf9-dd58922e6b8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.172615 4642 scope.go:117] "RemoveContainer" containerID="996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.176632 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.176654 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2gkl\" (UniqueName: \"kubernetes.io/projected/69d11272-eb01-4e3f-adf9-dd58922e6b8b-kube-api-access-g2gkl\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.176663 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69d11272-eb01-4e3f-adf9-dd58922e6b8b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.188787 4642 scope.go:117] "RemoveContainer" containerID="ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.200415 4642 scope.go:117] "RemoveContainer" containerID="9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"
Jan 28 07:00:48 crc kubenswrapper[4642]: E0128 07:00:48.200716 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd\": container with ID starting with 9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd not found: ID does not exist" containerID="9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.200752 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd"} err="failed to get container status \"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd\": rpc error: code = NotFound desc = could not find container \"9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd\": container with ID starting with 9756bbaa7ec33ba3d6e3f8ece78dbfc50b4fd36f959043593c8f6c6f94cd8bcd not found: ID does not exist"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.200773 4642 scope.go:117] "RemoveContainer" containerID="996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"
Jan 28 07:00:48 crc kubenswrapper[4642]: E0128 07:00:48.201012 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386\": container with ID starting with 996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386 not found: ID does not exist" containerID="996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.201062 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386"} err="failed to get container status \"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386\": rpc error: code = NotFound desc = could not find container \"996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386\": container with ID starting with 996558c8a52574bf4e32d8ec46fdfcc6ba670892c3e9cc611d5a85568b022386 not found: ID does not exist"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.201091 4642 scope.go:117] "RemoveContainer" containerID="ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903"
Jan 28 07:00:48 crc kubenswrapper[4642]: E0128 07:00:48.201353 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903\": container with ID starting with ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903 not found: ID does not exist" containerID="ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.201377 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903"} err="failed to get container status \"ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903\": rpc error: code = NotFound desc = could not find container \"ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903\": container with ID starting with ed00d2a7b8dbe414630405944da3274419acf1000915e85ac81bc727923aa903 not found: ID does not exist"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.478991 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.483048 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tk7kq"]
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.768133 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.768231 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:48 crc kubenswrapper[4642]: I0128 07:00:48.801116 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:49 crc kubenswrapper[4642]: I0128 07:00:49.107839 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" path="/var/lib/kubelet/pods/69d11272-eb01-4e3f-adf9-dd58922e6b8b/volumes"
Jan 28 07:00:49 crc kubenswrapper[4642]: I0128 07:00:49.198150 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:50 crc kubenswrapper[4642]: I0128 07:00:50.224560 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-554f878768-rqjln"
Jan 28 07:00:51 crc kubenswrapper[4642]: I0128 07:00:51.241557 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.195029 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bwrfs" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="registry-server" containerID="cri-o://09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a" gracePeriod=2
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.520155 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.634304 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities\") pod \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") "
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.634361 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28fth\" (UniqueName: \"kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth\") pod \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") "
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.634429 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content\") pod \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\" (UID: \"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb\") "
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.635113 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities" (OuterVolumeSpecName: "utilities") pod "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" (UID: "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.641333 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth" (OuterVolumeSpecName: "kube-api-access-28fth") pod "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" (UID: "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb"). InnerVolumeSpecName "kube-api-access-28fth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.650456 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" (UID: "56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.735785 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.735817 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28fth\" (UniqueName: \"kubernetes.io/projected/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-kube-api-access-28fth\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:52 crc kubenswrapper[4642]: I0128 07:00:52.735830 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.203413 4642 generic.go:334] "Generic (PLEG): container finished" podID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerID="09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a" exitCode=0
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.203461 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwrfs"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.203504 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerDied","Data":"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"}
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.203574 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwrfs" event={"ID":"56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb","Type":"ContainerDied","Data":"cf5d3c4c80a6594c90c231815dd2648a7dea525e7ae6d2adc7d2acb598298edc"}
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.203600 4642 scope.go:117] "RemoveContainer" containerID="09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.223473 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.227111 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwrfs"]
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.228245 4642 scope.go:117] "RemoveContainer" containerID="4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.250655 4642 scope.go:117] "RemoveContainer" containerID="2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.263293 4642 scope.go:117] "RemoveContainer" containerID="09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"
Jan 28 07:00:53 crc kubenswrapper[4642]: E0128 07:00:53.263638 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a\": container with ID starting with 09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a not found: ID does not exist" containerID="09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.263671 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a"} err="failed to get container status \"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a\": rpc error: code = NotFound desc = could not find container \"09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a\": container with ID starting with 09cc1a3defd53c4cf1ec9b340d504bc078964c3caaec532175901d3f727d4e1a not found: ID does not exist"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.263690 4642 scope.go:117] "RemoveContainer" containerID="4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"
Jan 28 07:00:53 crc kubenswrapper[4642]: E0128 07:00:53.264042 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365\": container with ID starting with 4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365 not found: ID does not exist" containerID="4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.264074 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365"} err="failed to get container status \"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365\": rpc error: code = NotFound desc = could not find container \"4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365\": container with ID starting with 4ce4e6975e54f78d89030f257034389c2b033805722bb1fa8f9e46603f12b365 not found: ID does not exist"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.264099 4642 scope.go:117] "RemoveContainer" containerID="2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418"
Jan 28 07:00:53 crc kubenswrapper[4642]: E0128 07:00:53.264392 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418\": container with ID starting with 2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418 not found: ID does not exist" containerID="2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418"
Jan 28 07:00:53 crc kubenswrapper[4642]: I0128 07:00:53.264416 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418"} err="failed to get container status \"2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418\": rpc error: code = NotFound desc = could not find container \"2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418\": container with ID starting with 2bb80b4e7043a32c532376b74893c72b8e44adcc92ea781ac5c76d2e3a6c2418 not found: ID does not exist"
Jan 28 07:00:55 crc kubenswrapper[4642]: I0128 07:00:55.107452 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" path="/var/lib/kubelet/pods/56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb/volumes"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260268 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l"]
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260782 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260793 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260801 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="extract-utilities"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260806 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="extract-utilities"
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260820 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="extract-content"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260826 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="extract-content"
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260833 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="extract-content"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260838 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="extract-content"
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260847 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260852 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.260860 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="extract-utilities"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260865 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="extract-utilities"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260949 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d11272-eb01-4e3f-adf9-dd58922e6b8b" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.260960 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ad2bca-f3c3-4ec7-9b13-a17f26d4fceb" containerName="registry-server"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.261280 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.263055 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-m77pm"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.270006 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.274602 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.275275 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.277424 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.277985 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.279588 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7d6hh"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.279965 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nsw7f"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.291780 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.294530 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.297542 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.298288 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.302849 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tx9wx"
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.319209 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2"]
Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.319858 4642 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.321230 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-469d9" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.323739 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.332369 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mwm\" (UniqueName: \"kubernetes.io/projected/3b826964-4d30-4419-85ff-e4c4fab25d5f-kube-api-access-55mwm\") pod \"heat-operator-controller-manager-594c8c9d5d-wvkg2\" (UID: \"3b826964-4d30-4419-85ff-e4c4fab25d5f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.332533 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsm9\" (UniqueName: \"kubernetes.io/projected/8ce8250d-808a-4044-9473-ef4de236ea47-kube-api-access-jwsm9\") pod \"designate-operator-controller-manager-b45d7bf98-cv9ph\" (UID: \"8ce8250d-808a-4044-9473-ef4de236ea47\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.332635 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kx8z\" (UniqueName: \"kubernetes.io/projected/e7c99a85-efe2-41d4-8682-b91441ed42bf-kube-api-access-5kx8z\") pod \"cinder-operator-controller-manager-7478f7dbf9-ppss4\" (UID: \"e7c99a85-efe2-41d4-8682-b91441ed42bf\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.332789 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j5jh\" (UniqueName: \"kubernetes.io/projected/8f714147-0e51-40d4-bc83-a1bcd90da40f-kube-api-access-2j5jh\") pod \"barbican-operator-controller-manager-7f86f8796f-r6l8l\" (UID: \"8f714147-0e51-40d4-bc83-a1bcd90da40f\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.332828 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xb8\" (UniqueName: \"kubernetes.io/projected/926efdce-a7f6-465b-b4e8-752d78e79cae-kube-api-access-s5xb8\") pod \"glance-operator-controller-manager-78fdd796fd-r2p4j\" (UID: \"926efdce-a7f6-465b-b4e8-752d78e79cae\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.335651 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.351713 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.352509 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.355689 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wmt7v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.418234 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.419366 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.423734 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.425257 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-s2p6s" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434069 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55mwm\" (UniqueName: \"kubernetes.io/projected/3b826964-4d30-4419-85ff-e4c4fab25d5f-kube-api-access-55mwm\") pod \"heat-operator-controller-manager-594c8c9d5d-wvkg2\" (UID: \"3b826964-4d30-4419-85ff-e4c4fab25d5f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434121 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwsm9\" (UniqueName: \"kubernetes.io/projected/8ce8250d-808a-4044-9473-ef4de236ea47-kube-api-access-jwsm9\") pod \"designate-operator-controller-manager-b45d7bf98-cv9ph\" (UID: \"8ce8250d-808a-4044-9473-ef4de236ea47\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434158 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwjd\" (UniqueName: \"kubernetes.io/projected/5af1bfbf-97ed-4ac2-b688-60b50d0800f0-kube-api-access-mnwjd\") pod \"horizon-operator-controller-manager-77d5c5b54f-xqxpn\" (UID: \"5af1bfbf-97ed-4ac2-b688-60b50d0800f0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434200 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knm6\" (UniqueName: \"kubernetes.io/projected/fe0506df-e213-4430-a075-3e4a25ae3bf8-kube-api-access-8knm6\") pod \"ironic-operator-controller-manager-598f7747c9-n42jz\" (UID: \"fe0506df-e213-4430-a075-3e4a25ae3bf8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434228 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kx8z\" (UniqueName: \"kubernetes.io/projected/e7c99a85-efe2-41d4-8682-b91441ed42bf-kube-api-access-5kx8z\") pod \"cinder-operator-controller-manager-7478f7dbf9-ppss4\" (UID: \"e7c99a85-efe2-41d4-8682-b91441ed42bf\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434413 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2j5jh\" (UniqueName: \"kubernetes.io/projected/8f714147-0e51-40d4-bc83-a1bcd90da40f-kube-api-access-2j5jh\") pod \"barbican-operator-controller-manager-7f86f8796f-r6l8l\" (UID: \"8f714147-0e51-40d4-bc83-a1bcd90da40f\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.434467 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5xb8\" (UniqueName: \"kubernetes.io/projected/926efdce-a7f6-465b-b4e8-752d78e79cae-kube-api-access-s5xb8\") pod \"glance-operator-controller-manager-78fdd796fd-r2p4j\" (UID: \"926efdce-a7f6-465b-b4e8-752d78e79cae\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.466814 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kx8z\" (UniqueName: \"kubernetes.io/projected/e7c99a85-efe2-41d4-8682-b91441ed42bf-kube-api-access-5kx8z\") pod \"cinder-operator-controller-manager-7478f7dbf9-ppss4\" (UID: \"e7c99a85-efe2-41d4-8682-b91441ed42bf\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.474415 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.487336 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.487970 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.488101 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.488284 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.489649 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sttb9" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.489979 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mwm\" (UniqueName: \"kubernetes.io/projected/3b826964-4d30-4419-85ff-e4c4fab25d5f-kube-api-access-55mwm\") pod \"heat-operator-controller-manager-594c8c9d5d-wvkg2\" (UID: \"3b826964-4d30-4419-85ff-e4c4fab25d5f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.490393 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f72hr" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.490546 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.495792 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwsm9\" (UniqueName: \"kubernetes.io/projected/8ce8250d-808a-4044-9473-ef4de236ea47-kube-api-access-jwsm9\") pod \"designate-operator-controller-manager-b45d7bf98-cv9ph\" (UID: \"8ce8250d-808a-4044-9473-ef4de236ea47\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.503673 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.514692 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j5jh\" (UniqueName: \"kubernetes.io/projected/8f714147-0e51-40d4-bc83-a1bcd90da40f-kube-api-access-2j5jh\") pod \"barbican-operator-controller-manager-7f86f8796f-r6l8l\" (UID: \"8f714147-0e51-40d4-bc83-a1bcd90da40f\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.514705 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5xb8\" (UniqueName: \"kubernetes.io/projected/926efdce-a7f6-465b-b4e8-752d78e79cae-kube-api-access-s5xb8\") pod \"glance-operator-controller-manager-78fdd796fd-r2p4j\" (UID: \"926efdce-a7f6-465b-b4e8-752d78e79cae\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.527232 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.561318 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.562045 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563473 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-s8dnv" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563879 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563915 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2fff\" (UniqueName: \"kubernetes.io/projected/fd2f775c-8111-4523-b235-1e61f428b03e-kube-api-access-x2fff\") pod \"keystone-operator-controller-manager-b8b6d4659-jfxhp\" (UID: \"fd2f775c-8111-4523-b235-1e61f428b03e\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563940 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnwjd\" (UniqueName: \"kubernetes.io/projected/5af1bfbf-97ed-4ac2-b688-60b50d0800f0-kube-api-access-mnwjd\") pod \"horizon-operator-controller-manager-77d5c5b54f-xqxpn\" (UID: \"5af1bfbf-97ed-4ac2-b688-60b50d0800f0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563963 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knm6\" (UniqueName: \"kubernetes.io/projected/fe0506df-e213-4430-a075-3e4a25ae3bf8-kube-api-access-8knm6\") pod \"ironic-operator-controller-manager-598f7747c9-n42jz\" (UID: \"fe0506df-e213-4430-a075-3e4a25ae3bf8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.563983 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfw4c\" (UniqueName: \"kubernetes.io/projected/33d74ff8-8576-4acc-8233-df91f8c11cbd-kube-api-access-hfw4c\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.576575 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.576828 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.580295 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.581014 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.586084 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.586729 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.588805 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.589968 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.591622 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.592065 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-djxl7" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.592096 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.592569 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.592867 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4z727" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.594329 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-4jtsp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.595544 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.596015 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.599802 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knm6\" (UniqueName: \"kubernetes.io/projected/fe0506df-e213-4430-a075-3e4a25ae3bf8-kube-api-access-8knm6\") pod \"ironic-operator-controller-manager-598f7747c9-n42jz\" (UID: \"fe0506df-e213-4430-a075-3e4a25ae3bf8\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.601125 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-59rkz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.605999 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.611924 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.612717 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnwjd\" (UniqueName: \"kubernetes.io/projected/5af1bfbf-97ed-4ac2-b688-60b50d0800f0-kube-api-access-mnwjd\") pod \"horizon-operator-controller-manager-77d5c5b54f-xqxpn\" (UID: \"5af1bfbf-97ed-4ac2-b688-60b50d0800f0\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.613004 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.624219 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.630637 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.631399 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.632819 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.633041 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9fzdx" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.634948 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.636034 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.636828 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.638034 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-v24mz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.649711 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.662086 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.662808 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.664885 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665063 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dk6x\" (UniqueName: \"kubernetes.io/projected/c56780c4-c549-4261-807d-c85fa6bbb166-kube-api-access-2dk6x\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-5n97g\" (UID: \"c56780c4-c549-4261-807d-c85fa6bbb166\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665111 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665130 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmj2v\" (UniqueName: \"kubernetes.io/projected/7c0247c0-e28d-4914-8d63-d90f9ad06fe3-kube-api-access-lmj2v\") pod \"nova-operator-controller-manager-7bdb645866-gxt6x\" (UID: \"7c0247c0-e28d-4914-8d63-d90f9ad06fe3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665154 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2fff\" (UniqueName: \"kubernetes.io/projected/fd2f775c-8111-4523-b235-1e61f428b03e-kube-api-access-x2fff\") pod \"keystone-operator-controller-manager-b8b6d4659-jfxhp\" (UID: \"fd2f775c-8111-4523-b235-1e61f428b03e\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665180 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6dv\" (UniqueName: \"kubernetes.io/projected/955adb33-713e-4988-a885-8c26474165e5-kube-api-access-hl6dv\") pod \"neutron-operator-controller-manager-78d58447c5-nxs2v\" (UID: 
\"955adb33-713e-4988-a885-8c26474165e5\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665218 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665240 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfw4c\" (UniqueName: \"kubernetes.io/projected/33d74ff8-8576-4acc-8233-df91f8c11cbd-kube-api-access-hfw4c\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665270 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkf6\" (UniqueName: \"kubernetes.io/projected/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-kube-api-access-rbkf6\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665362 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhd5\" (UniqueName: \"kubernetes.io/projected/43c6d7b6-0086-4de0-b6d6-1a313d0c7214-kube-api-access-jdhd5\") pod \"octavia-operator-controller-manager-5f4cd88d46-4xmct\" (UID: \"43c6d7b6-0086-4de0-b6d6-1a313d0c7214\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.665378 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnc8x\" (UniqueName: \"kubernetes.io/projected/d0b658bf-5e42-4af9-93ce-b6e0b03b1db2-kube-api-access-lnc8x\") pod \"manila-operator-controller-manager-78c6999f6f-snqv5\" (UID: \"d0b658bf-5e42-4af9-93ce-b6e0b03b1db2\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.666077 4642 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.666110 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:28.166098638 +0000 UTC m=+811.398187448 (durationBeforeRetry 500ms). 
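At this point the mount of the "cert" volume has failed because the infra-operator-webhook-server-cert Secret does not exist yet, and nestedpendingoperations schedules the next attempt 500ms out (the durationBeforeRetry in the entry above). Kubelet keeps retrying failed volume mounts on a growing delay, so a Secret that is created moments later, as webhook certificates usually are, clears this error without intervention. A minimal Go sketch of that retry shape: the 500ms starting delay comes from the log, while the stub mountVolume and the cap on the delay are assumptions for illustration, not kubelet code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume is a hypothetical stand-in for MountVolume.SetUp: it keeps
// failing until the Secret backing the volume exists (here, three attempts).
func mountVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`secret "infra-operator-webhook-server-cert" not found`)
	}
	return nil
}

func main() {
	// "durationBeforeRetry 500ms" comes from the log entry above; the
	// doubling and the ~2m2s cap are assumptions about the backoff shape.
	delay := 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second

	for attempt := 0; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("SetUp failed (%v); no retries permitted until %s\n",
			err, time.Now().Add(delay).Format(time.RFC3339Nano))
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The error detail kubelet records for the refused operation follows: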
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.669525 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-b49jc" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.679132 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.682601 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.683286 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.685270 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2p4g7" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.692254 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2fff\" (UniqueName: \"kubernetes.io/projected/fd2f775c-8111-4523-b235-1e61f428b03e-kube-api-access-x2fff\") pod \"keystone-operator-controller-manager-b8b6d4659-jfxhp\" (UID: \"fd2f775c-8111-4523-b235-1e61f428b03e\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.693802 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfw4c\" (UniqueName: \"kubernetes.io/projected/33d74ff8-8576-4acc-8233-df91f8c11cbd-kube-api-access-hfw4c\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.712522 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.713114 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.746146 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.747041 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.757258 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.757273 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-drfps" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.765850 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6dv\" (UniqueName: \"kubernetes.io/projected/955adb33-713e-4988-a885-8c26474165e5-kube-api-access-hl6dv\") pod \"neutron-operator-controller-manager-78d58447c5-nxs2v\" (UID: \"955adb33-713e-4988-a885-8c26474165e5\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.765884 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.765910 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqh9b\" (UniqueName: \"kubernetes.io/projected/d1e9a5df-6796-4bdb-8412-f2f832aeebd3-kube-api-access-hqh9b\") pod \"placement-operator-controller-manager-79d5ccc684-lfdnj\" (UID: \"d1e9a5df-6796-4bdb-8412-f2f832aeebd3\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.765936 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbkf6\" (UniqueName: \"kubernetes.io/projected/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-kube-api-access-rbkf6\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.765988 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2zz\" (UniqueName: \"kubernetes.io/projected/ef130a26-1119-48ca-87c7-9def2d39f0b5-kube-api-access-2x2zz\") pod \"ovn-operator-controller-manager-6f75f45d54-gpvnm\" (UID: \"ef130a26-1119-48ca-87c7-9def2d39f0b5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.766006 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq8x\" (UniqueName: \"kubernetes.io/projected/be43dd0d-944f-4d01-8e8f-22adc9306708-kube-api-access-4gq8x\") pod \"swift-operator-controller-manager-547cbdb99f-x62jq\" (UID: \"be43dd0d-944f-4d01-8e8f-22adc9306708\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.766026 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdhd5\" (UniqueName: 
\"kubernetes.io/projected/43c6d7b6-0086-4de0-b6d6-1a313d0c7214-kube-api-access-jdhd5\") pod \"octavia-operator-controller-manager-5f4cd88d46-4xmct\" (UID: \"43c6d7b6-0086-4de0-b6d6-1a313d0c7214\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.766461 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnc8x\" (UniqueName: \"kubernetes.io/projected/d0b658bf-5e42-4af9-93ce-b6e0b03b1db2-kube-api-access-lnc8x\") pod \"manila-operator-controller-manager-78c6999f6f-snqv5\" (UID: \"d0b658bf-5e42-4af9-93ce-b6e0b03b1db2\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.766497 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dk6x\" (UniqueName: \"kubernetes.io/projected/c56780c4-c549-4261-807d-c85fa6bbb166-kube-api-access-2dk6x\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-5n97g\" (UID: \"c56780c4-c549-4261-807d-c85fa6bbb166\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.766530 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmj2v\" (UniqueName: \"kubernetes.io/projected/7c0247c0-e28d-4914-8d63-d90f9ad06fe3-kube-api-access-lmj2v\") pod \"nova-operator-controller-manager-7bdb645866-gxt6x\" (UID: \"7c0247c0-e28d-4914-8d63-d90f9ad06fe3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.766633 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:27 crc kubenswrapper[4642]: E0128 07:01:27.766688 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:28.266674004 +0000 UTC m=+811.498762814 (durationBeforeRetry 500ms). 
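The same race repeats here for openstack-baremetal-operator-webhook-server-cert: the pod was admitted before whatever issues the webhook certificate had populated the Secret. The condition kubelet is polling on can be expressed directly against the API; below is a sketch with client-go, assuming in-cluster credentials and the namespace/secret names taken from the log, with waitForSecret as an illustrative helper rather than anything kubelet exposes.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForSecret polls until the named Secret exists, which is the same
// condition kubelet is waiting out with its 500ms deferrals above.
func waitForSecret(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not issued yet; keep polling
			}
			return err == nil, err
		})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForSecret(context.Background(), cs,
		"openstack-operators", "openstack-baremetal-operator-webhook-server-cert"); err != nil {
		panic(err)
	}
	fmt.Println("webhook serving cert is available")
}

The corresponding error detail follows: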
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.796566 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdhd5\" (UniqueName: \"kubernetes.io/projected/43c6d7b6-0086-4de0-b6d6-1a313d0c7214-kube-api-access-jdhd5\") pod \"octavia-operator-controller-manager-5f4cd88d46-4xmct\" (UID: \"43c6d7b6-0086-4de0-b6d6-1a313d0c7214\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.797478 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmj2v\" (UniqueName: \"kubernetes.io/projected/7c0247c0-e28d-4914-8d63-d90f9ad06fe3-kube-api-access-lmj2v\") pod \"nova-operator-controller-manager-7bdb645866-gxt6x\" (UID: \"7c0247c0-e28d-4914-8d63-d90f9ad06fe3\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.826320 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6dv\" (UniqueName: \"kubernetes.io/projected/955adb33-713e-4988-a885-8c26474165e5-kube-api-access-hl6dv\") pod \"neutron-operator-controller-manager-78d58447c5-nxs2v\" (UID: \"955adb33-713e-4988-a885-8c26474165e5\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.826819 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dk6x\" (UniqueName: \"kubernetes.io/projected/c56780c4-c549-4261-807d-c85fa6bbb166-kube-api-access-2dk6x\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-5n97g\" (UID: \"c56780c4-c549-4261-807d-c85fa6bbb166\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.826879 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.829136 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.829156 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbkf6\" (UniqueName: \"kubernetes.io/projected/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-kube-api-access-rbkf6\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.834165 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gpl2c" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.846202 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnc8x\" (UniqueName: \"kubernetes.io/projected/d0b658bf-5e42-4af9-93ce-b6e0b03b1db2-kube-api-access-lnc8x\") pod \"manila-operator-controller-manager-78c6999f6f-snqv5\" (UID: \"d0b658bf-5e42-4af9-93ce-b6e0b03b1db2\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.850954 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.856068 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.869628 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqh9b\" (UniqueName: \"kubernetes.io/projected/d1e9a5df-6796-4bdb-8412-f2f832aeebd3-kube-api-access-hqh9b\") pod \"placement-operator-controller-manager-79d5ccc684-lfdnj\" (UID: \"d1e9a5df-6796-4bdb-8412-f2f832aeebd3\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.869701 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpqh\" (UniqueName: \"kubernetes.io/projected/79a5daf5-be64-4759-bbb6-6d3850ff574e-kube-api-access-wzpqh\") pod \"test-operator-controller-manager-69797bbcbd-8n8j8\" (UID: \"79a5daf5-be64-4759-bbb6-6d3850ff574e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.869734 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2zz\" (UniqueName: \"kubernetes.io/projected/ef130a26-1119-48ca-87c7-9def2d39f0b5-kube-api-access-2x2zz\") pod \"ovn-operator-controller-manager-6f75f45d54-gpvnm\" (UID: \"ef130a26-1119-48ca-87c7-9def2d39f0b5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.869751 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gq8x\" (UniqueName: \"kubernetes.io/projected/be43dd0d-944f-4d01-8e8f-22adc9306708-kube-api-access-4gq8x\") pod \"swift-operator-controller-manager-547cbdb99f-x62jq\" (UID: \"be43dd0d-944f-4d01-8e8f-22adc9306708\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.869773 
4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hckv\" (UniqueName: \"kubernetes.io/projected/65108034-33b6-4b00-8bc0-6dbf2955510c-kube-api-access-8hckv\") pod \"telemetry-operator-controller-manager-85cd9769bb-g5765\" (UID: \"65108034-33b6-4b00-8bc0-6dbf2955510c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.872089 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vrkkm"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.872789 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.879255 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x7sfq" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.884631 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vrkkm"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.886217 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.886754 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqh9b\" (UniqueName: \"kubernetes.io/projected/d1e9a5df-6796-4bdb-8412-f2f832aeebd3-kube-api-access-hqh9b\") pod \"placement-operator-controller-manager-79d5ccc684-lfdnj\" (UID: \"d1e9a5df-6796-4bdb-8412-f2f832aeebd3\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.890531 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gq8x\" (UniqueName: \"kubernetes.io/projected/be43dd0d-944f-4d01-8e8f-22adc9306708-kube-api-access-4gq8x\") pod \"swift-operator-controller-manager-547cbdb99f-x62jq\" (UID: \"be43dd0d-944f-4d01-8e8f-22adc9306708\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.902541 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2zz\" (UniqueName: \"kubernetes.io/projected/ef130a26-1119-48ca-87c7-9def2d39f0b5-kube-api-access-2x2zz\") pod \"ovn-operator-controller-manager-6f75f45d54-gpvnm\" (UID: \"ef130a26-1119-48ca-87c7-9def2d39f0b5\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.922107 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.931207 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.950844 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.972743 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvx9j\" (UniqueName: \"kubernetes.io/projected/ffd25d2c-380e-4a54-a2af-ca488f438da7-kube-api-access-xvx9j\") pod \"watcher-operator-controller-manager-564965969-vrkkm\" (UID: \"ffd25d2c-380e-4a54-a2af-ca488f438da7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.972784 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzpqh\" (UniqueName: \"kubernetes.io/projected/79a5daf5-be64-4759-bbb6-6d3850ff574e-kube-api-access-wzpqh\") pod \"test-operator-controller-manager-69797bbcbd-8n8j8\" (UID: \"79a5daf5-be64-4759-bbb6-6d3850ff574e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.972744 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.972819 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hckv\" (UniqueName: \"kubernetes.io/projected/65108034-33b6-4b00-8bc0-6dbf2955510c-kube-api-access-8hckv\") pod \"telemetry-operator-controller-manager-85cd9769bb-g5765\" (UID: \"65108034-33b6-4b00-8bc0-6dbf2955510c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.975882 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.981824 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t"] Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.982602 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.992046 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sh6m7" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.992176 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.992324 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.998460 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzpqh\" (UniqueName: \"kubernetes.io/projected/79a5daf5-be64-4759-bbb6-6d3850ff574e-kube-api-access-wzpqh\") pod \"test-operator-controller-manager-69797bbcbd-8n8j8\" (UID: \"79a5daf5-be64-4759-bbb6-6d3850ff574e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:27 crc kubenswrapper[4642]: I0128 07:01:27.999427 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hckv\" (UniqueName: \"kubernetes.io/projected/65108034-33b6-4b00-8bc0-6dbf2955510c-kube-api-access-8hckv\") pod \"telemetry-operator-controller-manager-85cd9769bb-g5765\" (UID: \"65108034-33b6-4b00-8bc0-6dbf2955510c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.006740 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.007761 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.013565 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.025811 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.046965 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.056075 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.073784 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.073843 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvx9j\" (UniqueName: \"kubernetes.io/projected/ffd25d2c-380e-4a54-a2af-ca488f438da7-kube-api-access-xvx9j\") pod \"watcher-operator-controller-manager-564965969-vrkkm\" (UID: \"ffd25d2c-380e-4a54-a2af-ca488f438da7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.073869 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.073910 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69njm\" (UniqueName: \"kubernetes.io/projected/a453bbb9-176c-413b-82dd-294ecb3bdb2b-kube-api-access-69njm\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.080663 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.081806 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.086152 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-qpjv6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.086467 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.096067 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvx9j\" (UniqueName: \"kubernetes.io/projected/ffd25d2c-380e-4a54-a2af-ca488f438da7-kube-api-access-xvx9j\") pod \"watcher-operator-controller-manager-564965969-vrkkm\" (UID: \"ffd25d2c-380e-4a54-a2af-ca488f438da7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.102590 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f714147_0e51_40d4_bc83_a1bcd90da40f.slice/crio-d89e9a1aae57b0a0f550f4db2e20f3311c2015bff1715eff81aacc8ad1f284f2 WatchSource:0}: Error finding container d89e9a1aae57b0a0f550f4db2e20f3311c2015bff1715eff81aacc8ad1f284f2: Status 404 returned error can't find the container with id d89e9a1aae57b0a0f550f4db2e20f3311c2015bff1715eff81aacc8ad1f284f2 Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.103428 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.162925 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.168651 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.174905 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.174968 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kn2\" (UniqueName: \"kubernetes.io/projected/1bbb1fbc-a22c-4a90-b15a-abf791757ef2-kube-api-access-q4kn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8kkj6\" (UID: \"1bbb1fbc-a22c-4a90-b15a-abf791757ef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.175029 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175120 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.175159 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69njm\" (UniqueName: \"kubernetes.io/projected/a453bbb9-176c-413b-82dd-294ecb3bdb2b-kube-api-access-69njm\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175194 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:28.675149554 +0000 UTC m=+811.907238363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.175248 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175486 4642 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175539 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:29.175524719 +0000 UTC m=+812.407613528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175784 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.175839 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:28.675812601 +0000 UTC m=+811.907901411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.186521 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.192879 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69njm\" (UniqueName: \"kubernetes.io/projected/a453bbb9-176c-413b-82dd-294ecb3bdb2b-kube-api-access-69njm\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.199087 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.276846 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4kn2\" (UniqueName: \"kubernetes.io/projected/1bbb1fbc-a22c-4a90-b15a-abf791757ef2-kube-api-access-q4kn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8kkj6\" (UID: \"1bbb1fbc-a22c-4a90-b15a-abf791757ef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.276945 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.277061 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.277105 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:29.277093204 +0000 UTC m=+812.509182012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.301780 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4kn2\" (UniqueName: \"kubernetes.io/projected/1bbb1fbc-a22c-4a90-b15a-abf791757ef2-kube-api-access-q4kn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8kkj6\" (UID: \"1bbb1fbc-a22c-4a90-b15a-abf791757ef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.310608 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.316162 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af1bfbf_97ed_4ac2_b688_60b50d0800f0.slice/crio-9f74433bcc68ac520a0f2fb1f6f575e4e1f87dbf3a7214e59628860a9da8ec75 WatchSource:0}: Error finding container 9f74433bcc68ac520a0f2fb1f6f575e4e1f87dbf3a7214e59628860a9da8ec75: Status 404 returned error can't find the container with id 9f74433bcc68ac520a0f2fb1f6f575e4e1f87dbf3a7214e59628860a9da8ec75 Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.321704 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.353997 4642 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b826964_4d30_4419_85ff_e4c4fab25d5f.slice/crio-4e8ab723ee9e518d05bc4ea786430d862f95565a34264c693d8ba1e4c3ecb138 WatchSource:0}: Error finding container 4e8ab723ee9e518d05bc4ea786430d862f95565a34264c693d8ba1e4c3ecb138: Status 404 returned error can't find the container with id 4e8ab723ee9e518d05bc4ea786430d862f95565a34264c693d8ba1e4c3ecb138 Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.403308 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.407150 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" event={"ID":"8f714147-0e51-40d4-bc83-a1bcd90da40f","Type":"ContainerStarted","Data":"d89e9a1aae57b0a0f550f4db2e20f3311c2015bff1715eff81aacc8ad1f284f2"} Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.409131 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" event={"ID":"926efdce-a7f6-465b-b4e8-752d78e79cae","Type":"ContainerStarted","Data":"3a746ed8a80eead9db4ba3f477ffd8598bc7be38f381609563093ae08f300b75"} Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.410215 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2f775c_8111_4523_b235_1e61f428b03e.slice/crio-cf850eeab3c9136134fdc691b66e0c6aab04d0e10b8e2e50af6e909edc80e4f6 WatchSource:0}: Error finding container cf850eeab3c9136134fdc691b66e0c6aab04d0e10b8e2e50af6e909edc80e4f6: Status 404 returned error can't find the container with id cf850eeab3c9136134fdc691b66e0c6aab04d0e10b8e2e50af6e909edc80e4f6 Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.410466 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" event={"ID":"e7c99a85-efe2-41d4-8682-b91441ed42bf","Type":"ContainerStarted","Data":"06742afbcd5bfa11ebee606d1623d703be229cc4f90bfe11777b27c9d9c4f22c"} Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.411747 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" event={"ID":"5af1bfbf-97ed-4ac2-b688-60b50d0800f0","Type":"ContainerStarted","Data":"9f74433bcc68ac520a0f2fb1f6f575e4e1f87dbf3a7214e59628860a9da8ec75"} Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.412704 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" event={"ID":"3b826964-4d30-4419-85ff-e4c4fab25d5f","Type":"ContainerStarted","Data":"4e8ab723ee9e518d05bc4ea786430d862f95565a34264c693d8ba1e4c3ecb138"} Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.414738 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" event={"ID":"8ce8250d-808a-4044-9473-ef4de236ea47","Type":"ContainerStarted","Data":"68f15bcfb0f9374fce3ffff96cabd98aac82573d84af1a338c58c5e9e7ecc349"} Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.423852 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.563979 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.584551 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.588257 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.597645 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod955adb33_713e_4988_a885_8c26474165e5.slice/crio-5acc9f0aac6f2ad6298739b7c72684aef0c8e553e21f2ecb0ddc54e91f2a4675 WatchSource:0}: Error finding container 5acc9f0aac6f2ad6298739b7c72684aef0c8e553e21f2ecb0ddc54e91f2a4675: Status 404 returned error can't find the container with id 5acc9f0aac6f2ad6298739b7c72684aef0c8e553e21f2ecb0ddc54e91f2a4675 Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.598087 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0b658bf_5e42_4af9_93ce_b6e0b03b1db2.slice/crio-10b7975a8fe46ccacdf23a2d36624b09deea636f5f3b1f1590b165d571006daf WatchSource:0}: Error finding container 10b7975a8fe46ccacdf23a2d36624b09deea636f5f3b1f1590b165d571006daf: Status 404 returned error can't find the container with id 10b7975a8fe46ccacdf23a2d36624b09deea636f5f3b1f1590b165d571006daf Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.685977 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.686059 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.686158 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.686233 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:29.68621592 +0000 UTC m=+812.918304730 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.686247 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.686292 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:29.686277566 +0000 UTC m=+812.918366375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.694079 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.707985 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1e9a5df_6796_4bdb_8412_f2f832aeebd3.slice/crio-d1276d9941207eb602d54b368a575c6a5023c1a35f7e2470fbae14488b82c09b WatchSource:0}: Error finding container d1276d9941207eb602d54b368a575c6a5023c1a35f7e2470fbae14488b82c09b: Status 404 returned error can't find the container with id d1276d9941207eb602d54b368a575c6a5023c1a35f7e2470fbae14488b82c09b Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.708212 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.709101 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0247c0_e28d_4914_8d63_d90f9ad06fe3.slice/crio-034272903db87ae9b126387efc575cc6d990670f46377467bbb32ae8bce73bf2 WatchSource:0}: Error finding container 034272903db87ae9b126387efc575cc6d990670f46377467bbb32ae8bce73bf2: Status 404 returned error can't find the container with id 034272903db87ae9b126387efc575cc6d990670f46377467bbb32ae8bce73bf2 Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.711419 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.714391 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.717420 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct"] Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.722628 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc56780c4_c549_4261_807d_c85fa6bbb166.slice/crio-57b25d55670d9d18b5c803cd9cfdfb1f8613e05de20eea69966ba2b95956a393 WatchSource:0}: Error finding container 
57b25d55670d9d18b5c803cd9cfdfb1f8613e05de20eea69966ba2b95956a393: Status 404 returned error can't find the container with id 57b25d55670d9d18b5c803cd9cfdfb1f8613e05de20eea69966ba2b95956a393 Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.730389 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dk6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-5n97g_openstack-operators(c56780c4-c549-4261-807d-c85fa6bbb166): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.731533 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" podUID="c56780c4-c549-4261-807d-c85fa6bbb166" Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.731815 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c6d7b6_0086_4de0_b6d6_1a313d0c7214.slice/crio-ce656e5a4940b627448d960c7a5898642608a167e814535bdd223f73c33312c1 WatchSource:0}: Error finding container ce656e5a4940b627448d960c7a5898642608a167e814535bdd223f73c33312c1: Status 404 returned error can't find the container with id 
ce656e5a4940b627448d960c7a5898642608a167e814535bdd223f73c33312c1 Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.734064 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jdhd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-4xmct_openstack-operators(43c6d7b6-0086-4de0-b6d6-1a313d0c7214): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.735485 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" podUID="43c6d7b6-0086-4de0-b6d6-1a313d0c7214" Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.805593 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.809249 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vrkkm"] Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.819360 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8"] Jan 28 
07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.819399 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xvx9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-vrkkm_openstack-operators(ffd25d2c-380e-4a54-a2af-ca488f438da7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.819702 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.820946 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" podUID="65108034-33b6-4b00-8bc0-6dbf2955510c"
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.820973 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" podUID="ffd25d2c-380e-4a54-a2af-ca488f438da7"
Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.825337 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a5daf5_be64_4759_bbb6_6d3850ff574e.slice/crio-d983a65a78d1dc2581cd98bd9f13658124d4a7b92bec916597b91ca56d167477 WatchSource:0}: Error finding container d983a65a78d1dc2581cd98bd9f13658124d4a7b92bec916597b91ca56d167477: Status 404 returned error can't find the container with id d983a65a78d1dc2581cd98bd9f13658124d4a7b92bec916597b91ca56d167477
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.826877 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzpqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-8n8j8_openstack-operators(79a5daf5-be64-4759-bbb6-6d3850ff574e): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.828023 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" podUID="79a5daf5-be64-4759-bbb6-6d3850ff574e"
Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.830030 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq"]
Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.837649 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe43dd0d_944f_4d01_8e8f_22adc9306708.slice/crio-475243318a2e5dab9242c87b3a2133ca09df9d9b626c9cedb80a297c8b9c3f39 WatchSource:0}: Error finding container 475243318a2e5dab9242c87b3a2133ca09df9d9b626c9cedb80a297c8b9c3f39: Status 404 returned error can't find the container with id 475243318a2e5dab9242c87b3a2133ca09df9d9b626c9cedb80a297c8b9c3f39
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.841726 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gq8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-x62jq_openstack-operators(be43dd0d-944f-4d01-8e8f-22adc9306708): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 28 07:01:28 crc kubenswrapper[4642]: E0128 07:01:28.843049 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" podUID="be43dd0d-944f-4d01-8e8f-22adc9306708"
Jan 28 07:01:28 crc kubenswrapper[4642]: I0128 07:01:28.893523 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6"]
Jan 28 07:01:28 crc kubenswrapper[4642]: W0128 07:01:28.898880 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bbb1fbc_a22c_4a90_b15a_abf791757ef2.slice/crio-0016621adc13034aa3d252ae8bee841a010db16f7fd57ea672546aa95c6bab95 WatchSource:0}: Error finding container 0016621adc13034aa3d252ae8bee841a010db16f7fd57ea672546aa95c6bab95: Status 404 returned error can't find the container with id 0016621adc13034aa3d252ae8bee841a010db16f7fd57ea672546aa95c6bab95
Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.192314 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4"
Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.192501 4642 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.192695 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:31.192678606 +0000 UTC m=+814.424767415 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found
Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.294311 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n"
Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.294429 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.294477 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:31.294464108 +0000 UTC m=+814.526552918 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.420533 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" event={"ID":"65108034-33b6-4b00-8bc0-6dbf2955510c","Type":"ContainerStarted","Data":"31b971036f28df6dd62903935e84f1081e4d9c5b24d399381945da79b52c1d27"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.422408 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" podUID="65108034-33b6-4b00-8bc0-6dbf2955510c" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.422893 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" event={"ID":"d0b658bf-5e42-4af9-93ce-b6e0b03b1db2","Type":"ContainerStarted","Data":"10b7975a8fe46ccacdf23a2d36624b09deea636f5f3b1f1590b165d571006daf"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.423831 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" event={"ID":"ef130a26-1119-48ca-87c7-9def2d39f0b5","Type":"ContainerStarted","Data":"2f9154162010a86f40b7743c460d4bcec8f3e9983cb578032e25a538cff7b50d"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.426540 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" event={"ID":"955adb33-713e-4988-a885-8c26474165e5","Type":"ContainerStarted","Data":"5acc9f0aac6f2ad6298739b7c72684aef0c8e553e21f2ecb0ddc54e91f2a4675"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.427726 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" event={"ID":"c56780c4-c549-4261-807d-c85fa6bbb166","Type":"ContainerStarted","Data":"57b25d55670d9d18b5c803cd9cfdfb1f8613e05de20eea69966ba2b95956a393"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.429476 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" podUID="c56780c4-c549-4261-807d-c85fa6bbb166" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.430504 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" event={"ID":"79a5daf5-be64-4759-bbb6-6d3850ff574e","Type":"ContainerStarted","Data":"d983a65a78d1dc2581cd98bd9f13658124d4a7b92bec916597b91ca56d167477"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.431787 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" podUID="79a5daf5-be64-4759-bbb6-6d3850ff574e" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.434701 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" event={"ID":"fe0506df-e213-4430-a075-3e4a25ae3bf8","Type":"ContainerStarted","Data":"4badfbe7b4df800eb87f3e38028da9385e1b7b87dc2bdea89fa5dcb5da569fb6"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.435618 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" event={"ID":"fd2f775c-8111-4523-b235-1e61f428b03e","Type":"ContainerStarted","Data":"cf850eeab3c9136134fdc691b66e0c6aab04d0e10b8e2e50af6e909edc80e4f6"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.437902 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" event={"ID":"be43dd0d-944f-4d01-8e8f-22adc9306708","Type":"ContainerStarted","Data":"475243318a2e5dab9242c87b3a2133ca09df9d9b626c9cedb80a297c8b9c3f39"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.439826 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" podUID="be43dd0d-944f-4d01-8e8f-22adc9306708" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.441181 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" event={"ID":"ffd25d2c-380e-4a54-a2af-ca488f438da7","Type":"ContainerStarted","Data":"26923170c9782573489a095f5ab45f1ba412aed3bfe0c14ef9192f780504dc8e"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.442130 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" podUID="ffd25d2c-380e-4a54-a2af-ca488f438da7" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.445870 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" event={"ID":"43c6d7b6-0086-4de0-b6d6-1a313d0c7214","Type":"ContainerStarted","Data":"ce656e5a4940b627448d960c7a5898642608a167e814535bdd223f73c33312c1"} Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.457489 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" podUID="43c6d7b6-0086-4de0-b6d6-1a313d0c7214" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.459565 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" event={"ID":"7c0247c0-e28d-4914-8d63-d90f9ad06fe3","Type":"ContainerStarted","Data":"034272903db87ae9b126387efc575cc6d990670f46377467bbb32ae8bce73bf2"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.462791 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" event={"ID":"d1e9a5df-6796-4bdb-8412-f2f832aeebd3","Type":"ContainerStarted","Data":"d1276d9941207eb602d54b368a575c6a5023c1a35f7e2470fbae14488b82c09b"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.469564 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" event={"ID":"1bbb1fbc-a22c-4a90-b15a-abf791757ef2","Type":"ContainerStarted","Data":"0016621adc13034aa3d252ae8bee841a010db16f7fd57ea672546aa95c6bab95"} Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.699811 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:29 crc kubenswrapper[4642]: I0128 07:01:29.699898 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.699970 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.700038 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:31.700018231 +0000 UTC m=+814.932107041 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.700077 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 07:01:29 crc kubenswrapper[4642]: E0128 07:01:29.700149 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:31.700133548 +0000 UTC m=+814.932222348 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.478570 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" podUID="c56780c4-c549-4261-807d-c85fa6bbb166" Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.478941 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" podUID="65108034-33b6-4b00-8bc0-6dbf2955510c" Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.478958 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" podUID="be43dd0d-944f-4d01-8e8f-22adc9306708" Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.478987 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" podUID="43c6d7b6-0086-4de0-b6d6-1a313d0c7214" Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.479005 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" podUID="ffd25d2c-380e-4a54-a2af-ca488f438da7" Jan 28 07:01:30 crc kubenswrapper[4642]: E0128 07:01:30.479145 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" podUID="79a5daf5-be64-4759-bbb6-6d3850ff574e" Jan 28 07:01:31 crc kubenswrapper[4642]: I0128 07:01:31.222664 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 
07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.222824 4642 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.222909 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:35.222891535 +0000 UTC m=+818.454980344 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: I0128 07:01:31.324166 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.324316 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.324404 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:35.324388656 +0000 UTC m=+818.556477465 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: I0128 07:01:31.728900 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:31 crc kubenswrapper[4642]: I0128 07:01:31.728980 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.729076 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.729114 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.729233 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:35.729216863 +0000 UTC m=+818.961305672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found Jan 28 07:01:31 crc kubenswrapper[4642]: E0128 07:01:31.729276 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:35.72925265 +0000 UTC m=+818.961341459 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: I0128 07:01:35.289009 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.289222 4642 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.289421 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:43.289406109 +0000 UTC m=+826.521494919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: I0128 07:01:35.390643 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.390809 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.390882 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:43.390862903 +0000 UTC m=+826.622951712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: I0128 07:01:35.794833 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:35 crc kubenswrapper[4642]: I0128 07:01:35.794964 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.794961 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.794999 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.795063 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:43.79504792 +0000 UTC m=+827.027136720 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found Jan 28 07:01:35 crc kubenswrapper[4642]: E0128 07:01:35.795151 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:43.795131909 +0000 UTC m=+827.027220718 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.537491 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" event={"ID":"e7c99a85-efe2-41d4-8682-b91441ed42bf","Type":"ContainerStarted","Data":"4ff57900b4f5f4f2f24dd7d0446c9dc292a1d4f5511a97b89ae093354cb5c9b9"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.539028 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.540376 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" event={"ID":"926efdce-a7f6-465b-b4e8-752d78e79cae","Type":"ContainerStarted","Data":"b262cb0a24dd8290089409960cd13031423769a0bc14cdf65333275c1e14475b"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.540482 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.541673 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" event={"ID":"fd2f775c-8111-4523-b235-1e61f428b03e","Type":"ContainerStarted","Data":"f47af51db1b15af5a3d6911d81321adcff30d2cbfc6c269659db0664749bbc20"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.542121 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.543400 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" event={"ID":"ef130a26-1119-48ca-87c7-9def2d39f0b5","Type":"ContainerStarted","Data":"04c69b245fc6256b2d917ae6dc6bc3406c6307f45990110d56f67eacce32a45d"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.543997 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.544979 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" event={"ID":"fe0506df-e213-4430-a075-3e4a25ae3bf8","Type":"ContainerStarted","Data":"30b3db9d588d5f9cccf99c12f868c6ed57bab0d3798c97b2e84676897eaa8faf"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.545400 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.546340 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" event={"ID":"8f714147-0e51-40d4-bc83-a1bcd90da40f","Type":"ContainerStarted","Data":"8206de30883beec7ef8db65f8c3dd13dd2b7dcc07f4738f93632b9dfb4a1cf95"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.546727 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.547722 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" event={"ID":"3b826964-4d30-4419-85ff-e4c4fab25d5f","Type":"ContainerStarted","Data":"3b9d49a1382af1358377104829d665f324a868ca008c325986a96d89ff45a9d8"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.548102 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.549427 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" event={"ID":"d0b658bf-5e42-4af9-93ce-b6e0b03b1db2","Type":"ContainerStarted","Data":"210e7b628ab77d5afb58f81f9228f396faab85df61da5c9bd539c5b1cbc5d213"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.549520 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.550579 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" event={"ID":"7c0247c0-e28d-4914-8d63-d90f9ad06fe3","Type":"ContainerStarted","Data":"74a98b71ef3595d62755896c8b39610a4f6ec4d576a8a79990b1dde44131c57a"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.550948 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.552249 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" event={"ID":"1bbb1fbc-a22c-4a90-b15a-abf791757ef2","Type":"ContainerStarted","Data":"18f523b7d92a227ee4dfb0c7b4ab167938ae3edc896aff811e7e54758f5e692c"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.553777 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" event={"ID":"5af1bfbf-97ed-4ac2-b688-60b50d0800f0","Type":"ContainerStarted","Data":"383b4b92b14361672a3897301e6d2a4c6522fb91bba037bd7444bd8b4f4eacb8"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.553982 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.555435 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" event={"ID":"8ce8250d-808a-4044-9473-ef4de236ea47","Type":"ContainerStarted","Data":"94cffdefa9fe81c59a90a252d41915b6a99f0a9e9f5160db08e9763d58269440"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.555664 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.557176 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" 
event={"ID":"955adb33-713e-4988-a885-8c26474165e5","Type":"ContainerStarted","Data":"08aefec0e1ecc6dad90ba5fe986b3ab661c75afb1d4eb54003875977e4850549"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.557356 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.558533 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" event={"ID":"d1e9a5df-6796-4bdb-8412-f2f832aeebd3","Type":"ContainerStarted","Data":"25aaa6c24f6b168573e6adaf74a2147386f73a191ed211ff9f592d59f3d5558b"} Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.558943 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.608380 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4" podStartSLOduration=2.161752939 podStartE2EDuration="13.608370135s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.233722553 +0000 UTC m=+811.465811363" lastFinishedPulling="2026-01-28 07:01:39.68033975 +0000 UTC m=+822.912428559" observedRunningTime="2026-01-28 07:01:40.587629536 +0000 UTC m=+823.819718346" watchObservedRunningTime="2026-01-28 07:01:40.608370135 +0000 UTC m=+823.840458945" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.609684 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j" podStartSLOduration=1.599814329 podStartE2EDuration="13.609677826s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.085696802 +0000 UTC m=+811.317785611" lastFinishedPulling="2026-01-28 07:01:40.095560308 +0000 UTC m=+823.327649108" observedRunningTime="2026-01-28 07:01:40.607129441 +0000 UTC m=+823.839218250" watchObservedRunningTime="2026-01-28 07:01:40.609677826 +0000 UTC m=+823.841766634" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.674380 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2" podStartSLOduration=1.972136404 podStartE2EDuration="13.674365217s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.358515389 +0000 UTC m=+811.590604198" lastFinishedPulling="2026-01-28 07:01:40.060744201 +0000 UTC m=+823.292833011" observedRunningTime="2026-01-28 07:01:40.671753805 +0000 UTC m=+823.903842613" watchObservedRunningTime="2026-01-28 07:01:40.674365217 +0000 UTC m=+823.906454026" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.771284 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5" podStartSLOduration=2.294842751 podStartE2EDuration="13.771266241s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.599633258 +0000 UTC m=+811.831722067" lastFinishedPulling="2026-01-28 07:01:40.076056747 +0000 UTC m=+823.308145557" observedRunningTime="2026-01-28 07:01:40.770679086 +0000 UTC m=+824.002767895" watchObservedRunningTime="2026-01-28 07:01:40.771266241 +0000 UTC m=+824.003355050" Jan 28 
07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.831466 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn" podStartSLOduration=2.084851138 podStartE2EDuration="13.831448068s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.326868543 +0000 UTC m=+811.558957352" lastFinishedPulling="2026-01-28 07:01:40.073465472 +0000 UTC m=+823.305554282" observedRunningTime="2026-01-28 07:01:40.823376863 +0000 UTC m=+824.055465672" watchObservedRunningTime="2026-01-28 07:01:40.831448068 +0000 UTC m=+824.063536876" Jan 28 07:01:40 crc kubenswrapper[4642]: I0128 07:01:40.896009 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp" podStartSLOduration=2.21329499 podStartE2EDuration="13.895995656s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.412296162 +0000 UTC m=+811.644384972" lastFinishedPulling="2026-01-28 07:01:40.094996829 +0000 UTC m=+823.327085638" observedRunningTime="2026-01-28 07:01:40.89004908 +0000 UTC m=+824.122137889" watchObservedRunningTime="2026-01-28 07:01:40.895995656 +0000 UTC m=+824.128084465" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.026409 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" podStartSLOduration=3.066307088 podStartE2EDuration="14.026395339s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.720214831 +0000 UTC m=+811.952303640" lastFinishedPulling="2026-01-28 07:01:39.680303081 +0000 UTC m=+822.912391891" observedRunningTime="2026-01-28 07:01:40.980900963 +0000 UTC m=+824.212989772" watchObservedRunningTime="2026-01-28 07:01:41.026395339 +0000 UTC m=+824.258484149" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.068294 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8kkj6" podStartSLOduration=1.85509719 podStartE2EDuration="13.068278651s" podCreationTimestamp="2026-01-28 07:01:28 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.900900533 +0000 UTC m=+812.132989341" lastFinishedPulling="2026-01-28 07:01:40.114081992 +0000 UTC m=+823.346170802" observedRunningTime="2026-01-28 07:01:41.024343809 +0000 UTC m=+824.256432609" watchObservedRunningTime="2026-01-28 07:01:41.068278651 +0000 UTC m=+824.300367461" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.069723 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" podStartSLOduration=2.729930433 podStartE2EDuration="14.069713641s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.720941737 +0000 UTC m=+811.953030546" lastFinishedPulling="2026-01-28 07:01:40.060724945 +0000 UTC m=+823.292813754" observedRunningTime="2026-01-28 07:01:41.057220819 +0000 UTC m=+824.289309628" watchObservedRunningTime="2026-01-28 07:01:41.069713641 +0000 UTC m=+824.301802450" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.167478 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" podStartSLOduration=2.79171749 podStartE2EDuration="14.167461578s" 
podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.718978675 +0000 UTC m=+811.951067484" lastFinishedPulling="2026-01-28 07:01:40.094722763 +0000 UTC m=+823.326811572" observedRunningTime="2026-01-28 07:01:41.163790551 +0000 UTC m=+824.395879361" watchObservedRunningTime="2026-01-28 07:01:41.167461578 +0000 UTC m=+824.399550387" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.188164 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l" podStartSLOduration=2.613818249 podStartE2EDuration="14.188146181s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.10522545 +0000 UTC m=+811.337314259" lastFinishedPulling="2026-01-28 07:01:39.679553382 +0000 UTC m=+822.911642191" observedRunningTime="2026-01-28 07:01:41.188054047 +0000 UTC m=+824.420142856" watchObservedRunningTime="2026-01-28 07:01:41.188146181 +0000 UTC m=+824.420234989" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.205567 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" podStartSLOduration=2.74415865 podStartE2EDuration="14.205551594s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.599141834 +0000 UTC m=+811.831230643" lastFinishedPulling="2026-01-28 07:01:40.060534777 +0000 UTC m=+823.292623587" observedRunningTime="2026-01-28 07:01:41.201575173 +0000 UTC m=+824.433663983" watchObservedRunningTime="2026-01-28 07:01:41.205551594 +0000 UTC m=+824.437640403" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.226756 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph" podStartSLOduration=2.358059346 podStartE2EDuration="14.226743592s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.226139638 +0000 UTC m=+811.458228447" lastFinishedPulling="2026-01-28 07:01:40.094823884 +0000 UTC m=+823.326912693" observedRunningTime="2026-01-28 07:01:41.224711289 +0000 UTC m=+824.456800097" watchObservedRunningTime="2026-01-28 07:01:41.226743592 +0000 UTC m=+824.458832400" Jan 28 07:01:41 crc kubenswrapper[4642]: I0128 07:01:41.272393 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz" podStartSLOduration=2.798621608 podStartE2EDuration="14.272373462s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.586776171 +0000 UTC m=+811.818864980" lastFinishedPulling="2026-01-28 07:01:40.060528025 +0000 UTC m=+823.292616834" observedRunningTime="2026-01-28 07:01:41.252777708 +0000 UTC m=+824.484866517" watchObservedRunningTime="2026-01-28 07:01:41.272373462 +0000 UTC m=+824.504462271" Jan 28 07:01:43 crc kubenswrapper[4642]: I0128 07:01:43.305424 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.305569 4642 secret.go:188] Couldn't get secret 
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.306082 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert podName:33d74ff8-8576-4acc-8233-df91f8c11cbd nodeName:}" failed. No retries permitted until 2026-01-28 07:01:59.306064515 +0000 UTC m=+842.538153324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert") pod "infra-operator-controller-manager-694cf4f878-g9qq4" (UID: "33d74ff8-8576-4acc-8233-df91f8c11cbd") : secret "infra-operator-webhook-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: I0128 07:01:43.407628 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n"
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.407805 4642 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.407884 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert podName:e5eb1461-1a4f-403d-bc4f-c05d36ad23e8 nodeName:}" failed. No retries permitted until 2026-01-28 07:01:59.40786701 +0000 UTC m=+842.639955819 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" (UID: "e5eb1461-1a4f-403d-bc4f-c05d36ad23e8") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: I0128 07:01:43.812850 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t"
Jan 28 07:01:43 crc kubenswrapper[4642]: I0128 07:01:43.812924 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t"
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.813035 4642 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.813061 4642 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.813108 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:59.81309049 +0000 UTC m=+843.045179300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "webhook-server-cert" not found
Jan 28 07:01:43 crc kubenswrapper[4642]: E0128 07:01:43.813125 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs podName:a453bbb9-176c-413b-82dd-294ecb3bdb2b nodeName:}" failed. No retries permitted until 2026-01-28 07:01:59.813118873 +0000 UTC m=+843.045207683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-9kg2t" (UID: "a453bbb9-176c-413b-82dd-294ecb3bdb2b") : secret "metrics-server-cert" not found
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.586484 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-r6l8l"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.591852 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ppss4"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.594168 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-cv9ph"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.604933 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" event={"ID":"c56780c4-c549-4261-807d-c85fa6bbb166","Type":"ContainerStarted","Data":"620161d4e238b7281a6b8688c41197a3a175cef2fed2588c22105967cfcf2b38"}
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.605711 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.622586 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-r2p4j"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.685028 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-wvkg2"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.700319 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" podStartSLOduration=2.397731629 podStartE2EDuration="20.700305669s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.730273612 +0000 UTC m=+811.962362422" lastFinishedPulling="2026-01-28 07:01:47.032847653 +0000 UTC m=+830.264936462" observedRunningTime="2026-01-28 07:01:47.697435951 +0000 UTC m=+830.929524750" watchObservedRunningTime="2026-01-28 07:01:47.700305669 +0000 UTC m=+830.932394478"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.732938 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-xqxpn"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.864738 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-n42jz"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.891970 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-jfxhp"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.924908 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-snqv5"
Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.952928 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v"
probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-nxs2v" Jan 28 07:01:47 crc kubenswrapper[4642]: I0128 07:01:47.978307 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-gxt6x" Jan 28 07:01:48 crc kubenswrapper[4642]: I0128 07:01:48.022440 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-gpvnm" Jan 28 07:01:48 crc kubenswrapper[4642]: I0128 07:01:48.032945 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfdnj" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.634676 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" event={"ID":"be43dd0d-944f-4d01-8e8f-22adc9306708","Type":"ContainerStarted","Data":"c5c7248a784fd8829106bf123848102b3317b05552344ef3e8280d22571deaa7"} Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.635522 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.636818 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" event={"ID":"ffd25d2c-380e-4a54-a2af-ca488f438da7","Type":"ContainerStarted","Data":"73e638710d71fa737794dea33df3279e4c7abe80d380b932da3a6f7208aaba4a"} Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.636933 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.638001 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" event={"ID":"43c6d7b6-0086-4de0-b6d6-1a313d0c7214","Type":"ContainerStarted","Data":"039d71f3e510ab5ff07f62ec76835235b05184cd4c73f5df1d5a777a13ac35c0"} Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.638207 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.639029 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" event={"ID":"79a5daf5-be64-4759-bbb6-6d3850ff574e","Type":"ContainerStarted","Data":"987fb0251644cd5a5ab3139f8b669edb6d58d3b6859ae8a18b6af3e711f0f73a"} Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.639200 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.640036 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" event={"ID":"65108034-33b6-4b00-8bc0-6dbf2955510c","Type":"ContainerStarted","Data":"a75bffe03df23f1a37aeac1fe5e85061efff678de824e583ac60ade557e5f098"} Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.640235 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 
07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.650632 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" podStartSLOduration=2.806166962 podStartE2EDuration="24.650620213s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.84164543 +0000 UTC m=+812.073734239" lastFinishedPulling="2026-01-28 07:01:50.68609868 +0000 UTC m=+833.918187490" observedRunningTime="2026-01-28 07:01:51.649146991 +0000 UTC m=+834.881235800" watchObservedRunningTime="2026-01-28 07:01:51.650620213 +0000 UTC m=+834.882709023" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.663064 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" podStartSLOduration=2.807535346 podStartE2EDuration="24.663056088s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.819249648 +0000 UTC m=+812.051338456" lastFinishedPulling="2026-01-28 07:01:50.674770388 +0000 UTC m=+833.906859198" observedRunningTime="2026-01-28 07:01:51.659605336 +0000 UTC m=+834.891694155" watchObservedRunningTime="2026-01-28 07:01:51.663056088 +0000 UTC m=+834.895144897" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.675279 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" podStartSLOduration=2.809895126 podStartE2EDuration="24.675260277s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.819590218 +0000 UTC m=+812.051679028" lastFinishedPulling="2026-01-28 07:01:50.68495537 +0000 UTC m=+833.917044179" observedRunningTime="2026-01-28 07:01:51.6703236 +0000 UTC m=+834.902412409" watchObservedRunningTime="2026-01-28 07:01:51.675260277 +0000 UTC m=+834.907349085" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.683058 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" podStartSLOduration=2.74118742 podStartE2EDuration="24.683044581s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.733948457 +0000 UTC m=+811.966037265" lastFinishedPulling="2026-01-28 07:01:50.675805617 +0000 UTC m=+833.907894426" observedRunningTime="2026-01-28 07:01:51.678561527 +0000 UTC m=+834.910650326" watchObservedRunningTime="2026-01-28 07:01:51.683044581 +0000 UTC m=+834.915133390" Jan 28 07:01:51 crc kubenswrapper[4642]: I0128 07:01:51.694557 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" podStartSLOduration=2.849712232 podStartE2EDuration="24.694543933s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:28.826802757 +0000 UTC m=+812.058891567" lastFinishedPulling="2026-01-28 07:01:50.67163446 +0000 UTC m=+833.903723268" observedRunningTime="2026-01-28 07:01:51.690423502 +0000 UTC m=+834.922512311" watchObservedRunningTime="2026-01-28 07:01:51.694543933 +0000 UTC m=+834.926632742" Jan 28 07:01:57 crc kubenswrapper[4642]: I0128 07:01:57.933430 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-5n97g" Jan 28 07:01:57 crc kubenswrapper[4642]: I0128 07:01:57.978438 4642 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4xmct" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.010637 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x62jq" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.106851 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-g5765" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.171141 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8n8j8" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.201858 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.203468 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.205548 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-vrkkm" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.213536 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.353060 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.353139 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7m8k\" (UniqueName: \"kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.353376 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.455275 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7m8k\" (UniqueName: \"kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.455421 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " 
pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.455493 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.455908 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.455959 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.473485 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7m8k\" (UniqueName: \"kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k\") pod \"certified-operators-wd5zv\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.517927 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:01:58 crc kubenswrapper[4642]: I0128 07:01:58.774898 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:01:58 crc kubenswrapper[4642]: W0128 07:01:58.789962 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23eb3975_e1eb_4a5a_a478_14638e6db700.slice/crio-f225fe4b753ebbe6631b2032d833b43b4bb5af2fcebb10b5a11ba87eca4c53a5 WatchSource:0}: Error finding container f225fe4b753ebbe6631b2032d833b43b4bb5af2fcebb10b5a11ba87eca4c53a5: Status 404 returned error can't find the container with id f225fe4b753ebbe6631b2032d833b43b4bb5af2fcebb10b5a11ba87eca4c53a5 Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.369423 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.377510 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33d74ff8-8576-4acc-8233-df91f8c11cbd-cert\") pod \"infra-operator-controller-manager-694cf4f878-g9qq4\" (UID: \"33d74ff8-8576-4acc-8233-df91f8c11cbd\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.470813 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.475640 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e5eb1461-1a4f-403d-bc4f-c05d36ad23e8-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n\" (UID: \"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.498490 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9fzdx" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.507320 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.678718 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f72hr" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.687710 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.708389 4642 generic.go:334] "Generic (PLEG): container finished" podID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerID="69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18" exitCode=0 Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.708452 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerDied","Data":"69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18"} Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.708494 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerStarted","Data":"f225fe4b753ebbe6631b2032d833b43b4bb5af2fcebb10b5a11ba87eca4c53a5"} Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.875728 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.876146 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.881995 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.883322 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a453bbb9-176c-413b-82dd-294ecb3bdb2b-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-9kg2t\" (UID: \"a453bbb9-176c-413b-82dd-294ecb3bdb2b\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:01:59 crc kubenswrapper[4642]: I0128 07:01:59.886089 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n"] Jan 28 07:01:59 crc kubenswrapper[4642]: W0128 07:01:59.890140 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5eb1461_1a4f_403d_bc4f_c05d36ad23e8.slice/crio-7ba25041ac4e9fa4c89a2be5ca8c00d1449fff7125e01da93e0bb1215d921d56 WatchSource:0}: Error finding container 7ba25041ac4e9fa4c89a2be5ca8c00d1449fff7125e01da93e0bb1215d921d56: Status 404 returned error can't find the container with id 7ba25041ac4e9fa4c89a2be5ca8c00d1449fff7125e01da93e0bb1215d921d56 Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.062032 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4"] Jan 28 07:02:00 crc kubenswrapper[4642]: W0128 07:02:00.067349 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33d74ff8_8576_4acc_8233_df91f8c11cbd.slice/crio-97b62e259d4938b6537d974b7fbe63a755a90ab46991b199b4341557c0b988eb WatchSource:0}: Error finding container 97b62e259d4938b6537d974b7fbe63a755a90ab46991b199b4341557c0b988eb: Status 404 returned error can't find the container with id 97b62e259d4938b6537d974b7fbe63a755a90ab46991b199b4341557c0b988eb Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.115725 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sh6m7" Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.124926 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.557091 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t"] Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.724001 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerStarted","Data":"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db"} Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.726219 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" event={"ID":"a453bbb9-176c-413b-82dd-294ecb3bdb2b","Type":"ContainerStarted","Data":"e28639cf434c3dad83f759ab6c69a582204c65387efb370554d6c513fc8dd814"} Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.726260 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" event={"ID":"a453bbb9-176c-413b-82dd-294ecb3bdb2b","Type":"ContainerStarted","Data":"4ff109c7401a1900f621dbcc892a5742020b5ec03c38695337512187192f9c97"} Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.726302 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.727980 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" event={"ID":"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8","Type":"ContainerStarted","Data":"7ba25041ac4e9fa4c89a2be5ca8c00d1449fff7125e01da93e0bb1215d921d56"} Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.729222 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" event={"ID":"33d74ff8-8576-4acc-8233-df91f8c11cbd","Type":"ContainerStarted","Data":"97b62e259d4938b6537d974b7fbe63a755a90ab46991b199b4341557c0b988eb"} Jan 28 07:02:00 crc kubenswrapper[4642]: I0128 07:02:00.760557 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" podStartSLOduration=33.760542406 podStartE2EDuration="33.760542406s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:02:00.756622812 +0000 UTC m=+843.988711621" watchObservedRunningTime="2026-01-28 07:02:00.760542406 +0000 UTC m=+843.992631205" Jan 28 07:02:01 crc kubenswrapper[4642]: I0128 07:02:01.742659 4642 generic.go:334] "Generic (PLEG): container finished" podID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerID="4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db" exitCode=0 Jan 28 07:02:01 crc kubenswrapper[4642]: I0128 07:02:01.742871 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerDied","Data":"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db"} Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.754651 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" event={"ID":"e5eb1461-1a4f-403d-bc4f-c05d36ad23e8","Type":"ContainerStarted","Data":"66b8d245d338df855525ec52e071ef90256044e0e77ad4c804cb45e013a3bd74"} Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.755075 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.756106 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" event={"ID":"33d74ff8-8576-4acc-8233-df91f8c11cbd","Type":"ContainerStarted","Data":"7ec07720e60c9503e71bc3fe728cd0a64e4983d39d31e60992735bfb5f6bb34b"} Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.756822 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.776419 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" podStartSLOduration=33.467748335 podStartE2EDuration="35.776406035s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:01:59.892512998 +0000 UTC m=+843.124601807" lastFinishedPulling="2026-01-28 07:02:02.201170698 +0000 UTC m=+845.433259507" observedRunningTime="2026-01-28 07:02:02.774624623 +0000 UTC m=+846.006713432" watchObservedRunningTime="2026-01-28 07:02:02.776406035 +0000 UTC m=+846.008494843" Jan 28 07:02:02 crc kubenswrapper[4642]: I0128 07:02:02.797013 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" podStartSLOduration=33.665208834 podStartE2EDuration="35.79699123s" podCreationTimestamp="2026-01-28 07:01:27 +0000 UTC" firstStartedPulling="2026-01-28 07:02:00.069394674 +0000 UTC m=+843.301483483" lastFinishedPulling="2026-01-28 07:02:02.20117707 +0000 UTC m=+845.433265879" observedRunningTime="2026-01-28 07:02:02.790862651 +0000 UTC m=+846.022951460" watchObservedRunningTime="2026-01-28 07:02:02.79699123 +0000 UTC m=+846.029080039" Jan 28 07:02:03 crc kubenswrapper[4642]: I0128 07:02:03.764705 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerStarted","Data":"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518"} Jan 28 07:02:03 crc kubenswrapper[4642]: I0128 07:02:03.779578 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wd5zv" podStartSLOduration=2.795890238 podStartE2EDuration="5.77956199s" podCreationTimestamp="2026-01-28 07:01:58 +0000 UTC" firstStartedPulling="2026-01-28 07:01:59.712155954 +0000 UTC m=+842.944244763" lastFinishedPulling="2026-01-28 07:02:02.695827706 +0000 UTC m=+845.927916515" observedRunningTime="2026-01-28 07:02:03.778249009 +0000 UTC m=+847.010337819" watchObservedRunningTime="2026-01-28 07:02:03.77956199 +0000 UTC m=+847.011650798" Jan 28 07:02:08 crc kubenswrapper[4642]: I0128 07:02:08.519207 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:08 crc kubenswrapper[4642]: I0128 07:02:08.519558 4642 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:08 crc kubenswrapper[4642]: I0128 07:02:08.554381 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:08 crc kubenswrapper[4642]: I0128 07:02:08.828928 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:08 crc kubenswrapper[4642]: I0128 07:02:08.864411 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:02:09 crc kubenswrapper[4642]: I0128 07:02:09.513095 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n" Jan 28 07:02:09 crc kubenswrapper[4642]: I0128 07:02:09.692815 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-g9qq4" Jan 28 07:02:10 crc kubenswrapper[4642]: I0128 07:02:10.133213 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-9kg2t" Jan 28 07:02:10 crc kubenswrapper[4642]: I0128 07:02:10.811282 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wd5zv" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="registry-server" containerID="cri-o://076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518" gracePeriod=2 Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.178945 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.243861 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content\") pod \"23eb3975-e1eb-4a5a-a478-14638e6db700\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.243921 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7m8k\" (UniqueName: \"kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k\") pod \"23eb3975-e1eb-4a5a-a478-14638e6db700\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.243967 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities\") pod \"23eb3975-e1eb-4a5a-a478-14638e6db700\" (UID: \"23eb3975-e1eb-4a5a-a478-14638e6db700\") " Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.244729 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities" (OuterVolumeSpecName: "utilities") pod "23eb3975-e1eb-4a5a-a478-14638e6db700" (UID: "23eb3975-e1eb-4a5a-a478-14638e6db700"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.249306 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k" (OuterVolumeSpecName: "kube-api-access-x7m8k") pod "23eb3975-e1eb-4a5a-a478-14638e6db700" (UID: "23eb3975-e1eb-4a5a-a478-14638e6db700"). InnerVolumeSpecName "kube-api-access-x7m8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.346363 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7m8k\" (UniqueName: \"kubernetes.io/projected/23eb3975-e1eb-4a5a-a478-14638e6db700-kube-api-access-x7m8k\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.346393 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.357299 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23eb3975-e1eb-4a5a-a478-14638e6db700" (UID: "23eb3975-e1eb-4a5a-a478-14638e6db700"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.448019 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23eb3975-e1eb-4a5a-a478-14638e6db700-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.819315 4642 generic.go:334] "Generic (PLEG): container finished" podID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerID="076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518" exitCode=0 Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.819383 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wd5zv" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.819384 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerDied","Data":"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518"} Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.819794 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd5zv" event={"ID":"23eb3975-e1eb-4a5a-a478-14638e6db700","Type":"ContainerDied","Data":"f225fe4b753ebbe6631b2032d833b43b4bb5af2fcebb10b5a11ba87eca4c53a5"} Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.819831 4642 scope.go:117] "RemoveContainer" containerID="076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.842677 4642 scope.go:117] "RemoveContainer" containerID="4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.844173 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.847563 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wd5zv"] Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.870124 4642 scope.go:117] "RemoveContainer" containerID="69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.885462 4642 scope.go:117] "RemoveContainer" containerID="076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518" Jan 28 07:02:11 crc kubenswrapper[4642]: E0128 07:02:11.885832 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518\": container with ID starting with 076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518 not found: ID does not exist" containerID="076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.885883 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518"} err="failed to get container status \"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518\": rpc error: code = NotFound desc = could not find container \"076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518\": container with ID starting with 076889d2834754387ba857dc127642d5c6bf687e67f477a9a332bf02cf696518 not found: ID does not exist" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.885908 4642 scope.go:117] "RemoveContainer" containerID="4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db" Jan 28 07:02:11 crc kubenswrapper[4642]: E0128 07:02:11.886168 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db\": container with ID starting with 4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db not found: ID does not exist" containerID="4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.886210 4642 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db"} err="failed to get container status \"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db\": rpc error: code = NotFound desc = could not find container \"4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db\": container with ID starting with 4685034fc2e19cbe7ef467a5372aaf671fc55f0e0cd676e87565476185d058db not found: ID does not exist" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.886229 4642 scope.go:117] "RemoveContainer" containerID="69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18" Jan 28 07:02:11 crc kubenswrapper[4642]: E0128 07:02:11.886669 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18\": container with ID starting with 69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18 not found: ID does not exist" containerID="69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18" Jan 28 07:02:11 crc kubenswrapper[4642]: I0128 07:02:11.886695 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18"} err="failed to get container status \"69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18\": rpc error: code = NotFound desc = could not find container \"69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18\": container with ID starting with 69badfa7e7bcc481a5b3a59d16b2e0c86e89b24573c4fb13d16b6d01dcb8bd18 not found: ID does not exist" Jan 28 07:02:13 crc kubenswrapper[4642]: I0128 07:02:13.104766 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" path="/var/lib/kubelet/pods/23eb3975-e1eb-4a5a-a478-14638e6db700/volumes" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.835238 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:26 crc kubenswrapper[4642]: E0128 07:02:26.835811 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="extract-content" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.835822 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="extract-content" Jan 28 07:02:26 crc kubenswrapper[4642]: E0128 07:02:26.835836 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="registry-server" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.835841 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="registry-server" Jan 28 07:02:26 crc kubenswrapper[4642]: E0128 07:02:26.835856 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="extract-utilities" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.835861 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" containerName="extract-utilities" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.835965 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb3975-e1eb-4a5a-a478-14638e6db700" 
containerName="registry-server" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.836531 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.845714 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.846993 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.847303 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bhmw5" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.847631 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.847834 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.878384 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.879344 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.881308 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.889605 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.924516 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.924606 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlr5d\" (UniqueName: \"kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.924658 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.924695 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq75n\" (UniqueName: \"kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:26 crc kubenswrapper[4642]: I0128 07:02:26.924711 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.026153 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq75n\" (UniqueName: \"kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.026225 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.026307 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.026388 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlr5d\" (UniqueName: \"kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.026417 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.027129 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.027130 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.027135 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.041139 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq75n\" (UniqueName: \"kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n\") pod \"dnsmasq-dns-fc877b869-qgvmg\" (UID: 
\"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.041169 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlr5d\" (UniqueName: \"kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d\") pod \"dnsmasq-dns-869584cbbf-mcgrs\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.155624 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.190198 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.518279 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.522913 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.578387 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:02:27 crc kubenswrapper[4642]: W0128 07:02:27.579915 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod558fcc2b_c367_48b5_a3b6_de01fca006f3.slice/crio-23e9a110534c3fc8ac4368604e5249abaa62a3b1383861500a3806f74cb654f9 WatchSource:0}: Error finding container 23e9a110534c3fc8ac4368604e5249abaa62a3b1383861500a3806f74cb654f9: Status 404 returned error can't find the container with id 23e9a110534c3fc8ac4368604e5249abaa62a3b1383861500a3806f74cb654f9 Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.911843 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" event={"ID":"f0e67125-c7dc-452b-a629-c2b24eda8144","Type":"ContainerStarted","Data":"9ba3cc7e3c955f1b4037ccf407f250a5eca7fc19ae7c1a2c21f7ec0f84aff1f9"} Jan 28 07:02:27 crc kubenswrapper[4642]: I0128 07:02:27.912698 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" event={"ID":"558fcc2b-c367-48b5-a3b6-de01fca006f3","Type":"ContainerStarted","Data":"23e9a110534c3fc8ac4368604e5249abaa62a3b1383861500a3806f74cb654f9"} Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.738803 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.761888 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.762818 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.776493 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.859149 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.860179 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.860282 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsbv\" (UniqueName: \"kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.961739 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.962497 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.962614 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.963743 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.964150 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsbv\" (UniqueName: \"kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.971967 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.992967 
4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:02:29 crc kubenswrapper[4642]: I0128 07:02:29.995009 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsbv\" (UniqueName: \"kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv\") pod \"dnsmasq-dns-59fc74d969-mwlrv\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.017510 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.017592 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.065656 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.065693 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spxn\" (UniqueName: \"kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.065774 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.091740 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.166737 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.166996 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.167028 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spxn\" (UniqueName: \"kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.167502 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.167675 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.183047 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spxn\" (UniqueName: \"kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn\") pod \"dnsmasq-dns-9c657788f-w96v7\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.333075 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.540466 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:02:30 crc kubenswrapper[4642]: W0128 07:02:30.548614 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5254c758_f70f_49c8_b0d6_9754ba3a16c6.slice/crio-ba17194d93f4e843af34d443b17d9c564633d3f09a8365afed33e3a868c5f484 WatchSource:0}: Error finding container ba17194d93f4e843af34d443b17d9c564633d3f09a8365afed33e3a868c5f484: Status 404 returned error can't find the container with id ba17194d93f4e843af34d443b17d9c564633d3f09a8365afed33e3a868c5f484 Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.823179 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:02:30 crc kubenswrapper[4642]: W0128 07:02:30.828688 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0209bad5_4a93_4b5f_8643_5dd6c997debf.slice/crio-a356850a353708674eaec11eabf2d7d82173ebab5d7046a9feec77632bc20ff2 WatchSource:0}: Error finding container a356850a353708674eaec11eabf2d7d82173ebab5d7046a9feec77632bc20ff2: Status 404 returned error can't find the container with id a356850a353708674eaec11eabf2d7d82173ebab5d7046a9feec77632bc20ff2 Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.886847 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.888012 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891261 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891416 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891563 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjs64" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891672 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891770 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891886 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.891980 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.895470 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.940501 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c657788f-w96v7" event={"ID":"0209bad5-4a93-4b5f-8643-5dd6c997debf","Type":"ContainerStarted","Data":"a356850a353708674eaec11eabf2d7d82173ebab5d7046a9feec77632bc20ff2"} Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.941503 4642 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" event={"ID":"5254c758-f70f-49c8-b0d6-9754ba3a16c6","Type":"ContainerStarted","Data":"ba17194d93f4e843af34d443b17d9c564633d3f09a8365afed33e3a868c5f484"} Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981318 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981413 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981433 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981446 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981517 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981541 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981558 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981596 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981651 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gsm\" (UniqueName: 
\"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981669 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:30 crc kubenswrapper[4642]: I0128 07:02:30.981697 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083206 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083253 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083283 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083318 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083335 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083347 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083393 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc 
kubenswrapper[4642]: I0128 07:02:31.083413 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083431 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083464 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083492 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gsm\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083905 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.083968 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.084467 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.084478 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.084480 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.084644 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.087498 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.087731 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.088920 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.089148 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.097418 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gsm\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.109812 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.110604 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.111639 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.117537 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.118163 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.118944 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ntsp8" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.119549 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.119811 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.122386 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.123909 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.139274 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185081 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185128 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185152 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185204 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185363 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185405 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185447 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185556 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgff\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185596 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185649 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.185663 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.209280 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286694 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286720 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286746 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286772 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286788 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286816 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286841 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286855 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286870 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286886 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tfgff\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.286909 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.289152 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.290560 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.290799 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.291053 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.291141 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.292714 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.293128 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.293772 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.293916 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.295043 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.307536 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.308467 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgff\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff\") pod \"rabbitmq-cell1-server-0\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.467784 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.624818 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.656020 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.657221 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.668771 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.694453 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.694662 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.694801 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkr2m\" (UniqueName: \"kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.797236 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.797684 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.797995 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.797277 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.803212 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkr2m\" (UniqueName: \"kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.818480 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fkr2m\" (UniqueName: \"kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m\") pod \"community-operators-wgr5k\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.950718 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerStarted","Data":"f0ef3837e00efb888bf3dd13571ba78d0dc10a40f4512621065043fc4a8751b9"} Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.951706 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:02:31 crc kubenswrapper[4642]: I0128 07:02:31.991090 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:31 crc kubenswrapper[4642]: W0128 07:02:31.993173 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod716da2e6_dc75_431b_aa9e_d22bb4e0f91b.slice/crio-d193fd150436f5a279a12e20709d3c4bc0c3e2f8c94762370ec3731abbe81992 WatchSource:0}: Error finding container d193fd150436f5a279a12e20709d3c4bc0c3e2f8c94762370ec3731abbe81992: Status 404 returned error can't find the container with id d193fd150436f5a279a12e20709d3c4bc0c3e2f8c94762370ec3731abbe81992 Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.519895 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.535737 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.536807 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.541995 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.542297 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.542676 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tm8cg" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.543654 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.546756 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.547508 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.622816 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.622903 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-kolla-config\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.622951 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw49\" (UniqueName: \"kubernetes.io/projected/602638e1-0a19-4a7f-a752-50b0e228a7da-kube-api-access-tkw49\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.622990 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-default\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.623043 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.623076 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.623113 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.623127 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724595 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-kolla-config\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724647 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkw49\" (UniqueName: \"kubernetes.io/projected/602638e1-0a19-4a7f-a752-50b0e228a7da-kube-api-access-tkw49\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724676 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-default\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724711 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724734 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724759 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724774 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.724795 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: 
\"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.725016 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.725492 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-kolla-config\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.725823 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-default\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.726045 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/602638e1-0a19-4a7f-a752-50b0e228a7da-config-data-generated\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.727180 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602638e1-0a19-4a7f-a752-50b0e228a7da-operator-scripts\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.730021 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.730045 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602638e1-0a19-4a7f-a752-50b0e228a7da-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.737160 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkw49\" (UniqueName: \"kubernetes.io/projected/602638e1-0a19-4a7f-a752-50b0e228a7da-kube-api-access-tkw49\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.739441 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"602638e1-0a19-4a7f-a752-50b0e228a7da\") " pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.874473 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.987795 4642 generic.go:334] "Generic (PLEG): container finished" podID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerID="f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996" exitCode=0 Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.987868 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerDied","Data":"f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996"} Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.987893 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerStarted","Data":"84c1c679009b99b48d3854ec8ec2fd5f9cea6f96079e0cad158d4e80cb58a08d"} Jan 28 07:02:32 crc kubenswrapper[4642]: I0128 07:02:32.990536 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerStarted","Data":"d193fd150436f5a279a12e20709d3c4bc0c3e2f8c94762370ec3731abbe81992"} Jan 28 07:02:33 crc kubenswrapper[4642]: I0128 07:02:33.453860 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.008051 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerStarted","Data":"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11"} Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.010747 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"602638e1-0a19-4a7f-a752-50b0e228a7da","Type":"ContainerStarted","Data":"aa8d13833a35a33d45dedbbef2d90484cb94bc1694dd9e6a9f0f0423c0525d99"} Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.084776 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.085912 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.088229 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.090379 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.090421 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.090557 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-s8g6f" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.098389 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147483 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147530 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdm9x\" (UniqueName: \"kubernetes.io/projected/c03a521e-dd32-4a74-b452-512fe8bdae8e-kube-api-access-kdm9x\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147559 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147603 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147636 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147664 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147689 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.147706 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249669 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249771 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249817 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249848 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249896 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249945 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.249966 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdm9x\" (UniqueName: \"kubernetes.io/projected/c03a521e-dd32-4a74-b452-512fe8bdae8e-kube-api-access-kdm9x\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.250030 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.250232 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.250568 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.250831 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.251476 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.252140 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c03a521e-dd32-4a74-b452-512fe8bdae8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.258901 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.258923 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c03a521e-dd32-4a74-b452-512fe8bdae8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.267630 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdm9x\" (UniqueName: \"kubernetes.io/projected/c03a521e-dd32-4a74-b452-512fe8bdae8e-kube-api-access-kdm9x\") pod \"openstack-cell1-galera-0\" (UID: \"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.275402 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"c03a521e-dd32-4a74-b452-512fe8bdae8e\") " pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.346816 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.347736 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.353496 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.353642 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kfvmw" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.353781 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.375242 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.422037 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.454551 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-kolla-config\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.454661 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.454695 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-config-data\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.454755 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.454797 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tbg\" (UniqueName: \"kubernetes.io/projected/59611c4a-ee6f-4f16-9804-aba66d47d908-kube-api-access-m5tbg\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.556208 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-config-data\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.556314 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.556369 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tbg\" (UniqueName: \"kubernetes.io/projected/59611c4a-ee6f-4f16-9804-aba66d47d908-kube-api-access-m5tbg\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.556420 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-kolla-config\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.556539 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.557501 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-kolla-config\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.557973 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59611c4a-ee6f-4f16-9804-aba66d47d908-config-data\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.569174 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.574081 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tbg\" (UniqueName: \"kubernetes.io/projected/59611c4a-ee6f-4f16-9804-aba66d47d908-kube-api-access-m5tbg\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.580400 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59611c4a-ee6f-4f16-9804-aba66d47d908-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59611c4a-ee6f-4f16-9804-aba66d47d908\") " pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.668875 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 07:02:34 crc kubenswrapper[4642]: I0128 07:02:34.873231 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 07:02:35 crc kubenswrapper[4642]: I0128 07:02:35.022065 4642 generic.go:334] "Generic (PLEG): container finished" podID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerID="f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11" exitCode=0 Jan 28 07:02:35 crc kubenswrapper[4642]: I0128 07:02:35.022112 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerDied","Data":"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11"} Jan 28 07:02:35 crc kubenswrapper[4642]: I0128 07:02:35.024502 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c03a521e-dd32-4a74-b452-512fe8bdae8e","Type":"ContainerStarted","Data":"8c6e60f22ca57c2918095394ed48d68f067530063d23b7e64850cabf6a23a566"} Jan 28 07:02:35 crc kubenswrapper[4642]: I0128 07:02:35.038426 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 07:02:35 crc kubenswrapper[4642]: W0128 07:02:35.051356 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59611c4a_ee6f_4f16_9804_aba66d47d908.slice/crio-f66e0b19b77c3f058570a815d5c6ff13c1265a384027f00b2a9f27ccd317d8cf WatchSource:0}: Error finding container f66e0b19b77c3f058570a815d5c6ff13c1265a384027f00b2a9f27ccd317d8cf: Status 404 returned error can't find the container with id f66e0b19b77c3f058570a815d5c6ff13c1265a384027f00b2a9f27ccd317d8cf Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.033084 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerStarted","Data":"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22"} Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.036052 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59611c4a-ee6f-4f16-9804-aba66d47d908","Type":"ContainerStarted","Data":"f66e0b19b77c3f058570a815d5c6ff13c1265a384027f00b2a9f27ccd317d8cf"} Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.238003 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wgr5k" podStartSLOduration=2.737541135 podStartE2EDuration="5.237988801s" podCreationTimestamp="2026-01-28 07:02:31 +0000 UTC" firstStartedPulling="2026-01-28 07:02:33.027639047 +0000 UTC m=+876.259727856" lastFinishedPulling="2026-01-28 07:02:35.528086713 +0000 UTC m=+878.760175522" observedRunningTime="2026-01-28 07:02:36.052633791 +0000 UTC m=+879.284722600" watchObservedRunningTime="2026-01-28 07:02:36.237988801 +0000 UTC m=+879.470077610" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.239821 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.242446 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.245512 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mg6wz" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.247158 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.285769 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9jv\" (UniqueName: \"kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv\") pod \"kube-state-metrics-0\" (UID: \"9f7f3800-a8e9-4ff6-9c02-dd125daac158\") " pod="openstack/kube-state-metrics-0" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.387312 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9jv\" (UniqueName: \"kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv\") pod \"kube-state-metrics-0\" (UID: \"9f7f3800-a8e9-4ff6-9c02-dd125daac158\") " pod="openstack/kube-state-metrics-0" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.405028 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9jv\" (UniqueName: \"kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv\") pod \"kube-state-metrics-0\" (UID: \"9f7f3800-a8e9-4ff6-9c02-dd125daac158\") " pod="openstack/kube-state-metrics-0" Jan 28 07:02:36 crc kubenswrapper[4642]: I0128 07:02:36.560101 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:02:37 crc kubenswrapper[4642]: I0128 07:02:37.014611 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:02:37 crc kubenswrapper[4642]: W0128 07:02:37.024174 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7f3800_a8e9_4ff6_9c02_dd125daac158.slice/crio-9da5c1c2acc46452661f92f9647a7ee2249c2eb3742590443b3d7cd83fd6508b WatchSource:0}: Error finding container 9da5c1c2acc46452661f92f9647a7ee2249c2eb3742590443b3d7cd83fd6508b: Status 404 returned error can't find the container with id 9da5c1c2acc46452661f92f9647a7ee2249c2eb3742590443b3d7cd83fd6508b Jan 28 07:02:37 crc kubenswrapper[4642]: I0128 07:02:37.044981 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f7f3800-a8e9-4ff6-9c02-dd125daac158","Type":"ContainerStarted","Data":"9da5c1c2acc46452661f92f9647a7ee2249c2eb3742590443b3d7cd83fd6508b"} Jan 28 07:02:38 crc kubenswrapper[4642]: I0128 07:02:38.199239 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:02:38 crc kubenswrapper[4642]: I0128 07:02:38.199449 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:02:39 crc kubenswrapper[4642]: I0128 
07:02:39.075517 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f7f3800-a8e9-4ff6-9c02-dd125daac158","Type":"ContainerStarted","Data":"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28"} Jan 28 07:02:39 crc kubenswrapper[4642]: I0128 07:02:39.075896 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 07:02:39 crc kubenswrapper[4642]: I0128 07:02:39.096439 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.327739032 podStartE2EDuration="3.09642712s" podCreationTimestamp="2026-01-28 07:02:36 +0000 UTC" firstStartedPulling="2026-01-28 07:02:37.025741648 +0000 UTC m=+880.257830457" lastFinishedPulling="2026-01-28 07:02:38.794429736 +0000 UTC m=+882.026518545" observedRunningTime="2026-01-28 07:02:39.090262293 +0000 UTC m=+882.322351102" watchObservedRunningTime="2026-01-28 07:02:39.09642712 +0000 UTC m=+882.328515930" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.213392 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9d4kj"] Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.214110 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.219710 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5x4pd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.219837 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.220093 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.235438 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9d4kj"] Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.253952 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254007 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-ovn-controller-tls-certs\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254053 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b93c34-de22-48ac-80da-b79048401506-scripts\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254070 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-combined-ca-bundle\") pod \"ovn-controller-9d4kj\" (UID: 
\"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254115 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grzr\" (UniqueName: \"kubernetes.io/projected/29b93c34-de22-48ac-80da-b79048401506-kube-api-access-4grzr\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254142 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.254165 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-log-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.264920 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-nvphd"] Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.266368 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.290065 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nvphd"] Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355168 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-ovn-controller-tls-certs\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355237 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-log\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355279 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-etc-ovs\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355300 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b93c34-de22-48ac-80da-b79048401506-scripts\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355323 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-combined-ca-bundle\") pod 
\"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355342 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-lib\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355373 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-run\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355443 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4grzr\" (UniqueName: \"kubernetes.io/projected/29b93c34-de22-48ac-80da-b79048401506-kube-api-access-4grzr\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355464 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0307f10-0ff0-4421-91a1-34ff47b17d16-scripts\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355493 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjrj\" (UniqueName: \"kubernetes.io/projected/e0307f10-0ff0-4421-91a1-34ff47b17d16-kube-api-access-tbjrj\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355515 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355550 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-log-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355583 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.355988 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc 
kubenswrapper[4642]: I0128 07:02:40.356358 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-run\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.356508 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29b93c34-de22-48ac-80da-b79048401506-var-log-ovn\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.357734 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29b93c34-de22-48ac-80da-b79048401506-scripts\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.367328 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-ovn-controller-tls-certs\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.367790 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b93c34-de22-48ac-80da-b79048401506-combined-ca-bundle\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.375117 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4grzr\" (UniqueName: \"kubernetes.io/projected/29b93c34-de22-48ac-80da-b79048401506-kube-api-access-4grzr\") pod \"ovn-controller-9d4kj\" (UID: \"29b93c34-de22-48ac-80da-b79048401506\") " pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466144 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-run\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466232 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0307f10-0ff0-4421-91a1-34ff47b17d16-scripts\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466266 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjrj\" (UniqueName: \"kubernetes.io/projected/e0307f10-0ff0-4421-91a1-34ff47b17d16-kube-api-access-tbjrj\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466350 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-log\") pod 
\"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466392 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-etc-ovs\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466415 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-lib\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466591 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-lib\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.466638 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-run\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.467313 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-var-log\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.467695 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e0307f10-0ff0-4421-91a1-34ff47b17d16-etc-ovs\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.468233 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0307f10-0ff0-4421-91a1-34ff47b17d16-scripts\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.496729 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjrj\" (UniqueName: \"kubernetes.io/projected/e0307f10-0ff0-4421-91a1-34ff47b17d16-kube-api-access-tbjrj\") pod \"ovn-controller-ovs-nvphd\" (UID: \"e0307f10-0ff0-4421-91a1-34ff47b17d16\") " pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.537434 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9d4kj" Jan 28 07:02:40 crc kubenswrapper[4642]: I0128 07:02:40.605425 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.035241 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9d4kj"] Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.109568 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9d4kj" event={"ID":"29b93c34-de22-48ac-80da-b79048401506","Type":"ContainerStarted","Data":"7b5fe33c090cce60c54d70ded8d5be258a572091d745d49029e8cf2e4d010976"} Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.172233 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nvphd"] Jan 28 07:02:41 crc kubenswrapper[4642]: W0128 07:02:41.179321 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0307f10_0ff0_4421_91a1_34ff47b17d16.slice/crio-0741a94f2d83a9e4ff1d1ca8c2f5702ecb781dcad0beee94e06f112c31f00a13 WatchSource:0}: Error finding container 0741a94f2d83a9e4ff1d1ca8c2f5702ecb781dcad0beee94e06f112c31f00a13: Status 404 returned error can't find the container with id 0741a94f2d83a9e4ff1d1ca8c2f5702ecb781dcad0beee94e06f112c31f00a13 Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.953605 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xmwjf"] Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.954899 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.956240 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.956473 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.960015 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xmwjf"] Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.991465 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:41 crc kubenswrapper[4642]: I0128 07:02:41.991699 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003374 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovs-rundir\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003439 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc6wg\" (UniqueName: \"kubernetes.io/projected/9c18095f-18c4-435f-a2cc-216a62127faa-kube-api-access-fc6wg\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003492 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovn-rundir\") pod 
\"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003531 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c18095f-18c4-435f-a2cc-216a62127faa-config\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003550 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-combined-ca-bundle\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.003578 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.055333 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.056850 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.059949 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.059982 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zcpgr" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.061676 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.058951 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.069912 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.078002 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.109924 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.109977 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110008 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110108 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110203 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc6wg\" (UniqueName: \"kubernetes.io/projected/9c18095f-18c4-435f-a2cc-216a62127faa-kube-api-access-fc6wg\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110256 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mz6r\" (UniqueName: \"kubernetes.io/projected/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-kube-api-access-6mz6r\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110294 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovn-rundir\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110360 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-combined-ca-bundle\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110527 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110596 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110624 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovs-rundir\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110649 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110789 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c18095f-18c4-435f-a2cc-216a62127faa-config\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.110845 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.111291 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovn-rundir\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.113037 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c18095f-18c4-435f-a2cc-216a62127faa-ovs-rundir\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.114125 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c18095f-18c4-435f-a2cc-216a62127faa-config\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.119773 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.119910 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c18095f-18c4-435f-a2cc-216a62127faa-combined-ca-bundle\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.129902 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc6wg\" (UniqueName: \"kubernetes.io/projected/9c18095f-18c4-435f-a2cc-216a62127faa-kube-api-access-fc6wg\") pod \"ovn-controller-metrics-xmwjf\" (UID: \"9c18095f-18c4-435f-a2cc-216a62127faa\") " pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.134736 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nvphd" 
event={"ID":"e0307f10-0ff0-4421-91a1-34ff47b17d16","Type":"ContainerStarted","Data":"0741a94f2d83a9e4ff1d1ca8c2f5702ecb781dcad0beee94e06f112c31f00a13"} Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.173763 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212066 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212101 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212123 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212148 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212302 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mz6r\" (UniqueName: \"kubernetes.io/projected/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-kube-api-access-6mz6r\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212484 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.212875 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.213257 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.213268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") 
" pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.213366 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.213410 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.213836 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.215790 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.217859 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.218794 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.225945 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mz6r\" (UniqueName: \"kubernetes.io/projected/f7f96797-56a8-4fc5-a520-cfaecf44c4a0-kube-api-access-6mz6r\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.237259 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f7f96797-56a8-4fc5-a520-cfaecf44c4a0\") " pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.272683 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xmwjf" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.305652 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.384690 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.661714 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xmwjf"] Jan 28 07:02:42 crc kubenswrapper[4642]: W0128 07:02:42.674352 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c18095f_18c4_435f_a2cc_216a62127faa.slice/crio-a37338d7dd42c03b9ca44f333d9712a2ce6aebf6eccaaf75f4e7909001819ef7 WatchSource:0}: Error finding container a37338d7dd42c03b9ca44f333d9712a2ce6aebf6eccaaf75f4e7909001819ef7: Status 404 returned error can't find the container with id a37338d7dd42c03b9ca44f333d9712a2ce6aebf6eccaaf75f4e7909001819ef7 Jan 28 07:02:42 crc kubenswrapper[4642]: I0128 07:02:42.946605 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.076859 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.081564 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.083739 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-48nkh" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.084021 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.084129 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.084290 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.085868 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.142487 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xmwjf" event={"ID":"9c18095f-18c4-435f-a2cc-216a62127faa","Type":"ContainerStarted","Data":"a37338d7dd42c03b9ca44f333d9712a2ce6aebf6eccaaf75f4e7909001819ef7"} Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.148244 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f7f96797-56a8-4fc5-a520-cfaecf44c4a0","Type":"ContainerStarted","Data":"4ed6b911cbe79693975c324c407c03efd845b26d6a17a2e1985153d078fbad8b"} Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.230283 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.230321 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.230352 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.230369 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7fs\" (UniqueName: \"kubernetes.io/projected/08e72283-7898-4b33-a2ef-5ebe2a319fe8-kube-api-access-9p7fs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.230432 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.231287 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.231339 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-config\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.231452 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333684 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333723 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333759 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333780 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p7fs\" (UniqueName: 
\"kubernetes.io/projected/08e72283-7898-4b33-a2ef-5ebe2a319fe8-kube-api-access-9p7fs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333849 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333885 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333906 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-config\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.333937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.334259 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.335166 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-config\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.335539 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08e72283-7898-4b33-a2ef-5ebe2a319fe8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.335789 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.341898 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.341941 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.343151 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e72283-7898-4b33-a2ef-5ebe2a319fe8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.347878 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p7fs\" (UniqueName: \"kubernetes.io/projected/08e72283-7898-4b33-a2ef-5ebe2a319fe8-kube-api-access-9p7fs\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.355057 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08e72283-7898-4b33-a2ef-5ebe2a319fe8\") " pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.394414 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 07:02:43 crc kubenswrapper[4642]: I0128 07:02:43.846492 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.154534 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wgr5k" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="registry-server" containerID="cri-o://02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22" gracePeriod=2 Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.154898 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08e72283-7898-4b33-a2ef-5ebe2a319fe8","Type":"ContainerStarted","Data":"25a67d711f6829305a6eb6716f3196d7e5834ce5990a95df5abbfe5c2fb22da3"} Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.533496 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.655824 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities\") pod \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.655949 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content\") pod \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.656091 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkr2m\" (UniqueName: \"kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m\") pod \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\" (UID: \"aa646e79-c71d-48a5-8f8e-4d18c9614adf\") " Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.656587 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities" (OuterVolumeSpecName: "utilities") pod "aa646e79-c71d-48a5-8f8e-4d18c9614adf" (UID: "aa646e79-c71d-48a5-8f8e-4d18c9614adf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.658258 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.674986 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m" (OuterVolumeSpecName: "kube-api-access-fkr2m") pod "aa646e79-c71d-48a5-8f8e-4d18c9614adf" (UID: "aa646e79-c71d-48a5-8f8e-4d18c9614adf"). InnerVolumeSpecName "kube-api-access-fkr2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.706232 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa646e79-c71d-48a5-8f8e-4d18c9614adf" (UID: "aa646e79-c71d-48a5-8f8e-4d18c9614adf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.759249 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa646e79-c71d-48a5-8f8e-4d18c9614adf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:44 crc kubenswrapper[4642]: I0128 07:02:44.759272 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkr2m\" (UniqueName: \"kubernetes.io/projected/aa646e79-c71d-48a5-8f8e-4d18c9614adf-kube-api-access-fkr2m\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.168343 4642 generic.go:334] "Generic (PLEG): container finished" podID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerID="02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22" exitCode=0 Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.168477 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wgr5k" Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.168497 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerDied","Data":"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22"} Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.168855 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wgr5k" event={"ID":"aa646e79-c71d-48a5-8f8e-4d18c9614adf","Type":"ContainerDied","Data":"84c1c679009b99b48d3854ec8ec2fd5f9cea6f96079e0cad158d4e80cb58a08d"} Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.168888 4642 scope.go:117] "RemoveContainer" containerID="02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22" Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.198116 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:45 crc kubenswrapper[4642]: I0128 07:02:45.205124 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wgr5k"] Jan 28 07:02:46 crc kubenswrapper[4642]: I0128 07:02:46.345937 4642 scope.go:117] "RemoveContainer" containerID="f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11" Jan 28 07:02:46 crc kubenswrapper[4642]: I0128 07:02:46.541584 4642 scope.go:117] "RemoveContainer" containerID="f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996" Jan 28 07:02:46 crc kubenswrapper[4642]: I0128 07:02:46.566426 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 07:02:47 crc kubenswrapper[4642]: I0128 07:02:47.110704 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" path="/var/lib/kubelet/pods/aa646e79-c71d-48a5-8f8e-4d18c9614adf/volumes" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.146644 4642 scope.go:117] "RemoveContainer" containerID="02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22" Jan 28 07:02:49 crc kubenswrapper[4642]: E0128 07:02:49.147571 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22\": container with ID starting with 02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22 not 
found: ID does not exist" containerID="02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.147642 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22"} err="failed to get container status \"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22\": rpc error: code = NotFound desc = could not find container \"02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22\": container with ID starting with 02f21ff9b3b4dcebed169ff2dfddea77299db669fba8fd9b710f7f51714a3e22 not found: ID does not exist" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.147688 4642 scope.go:117] "RemoveContainer" containerID="f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11" Jan 28 07:02:49 crc kubenswrapper[4642]: E0128 07:02:49.148160 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11\": container with ID starting with f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11 not found: ID does not exist" containerID="f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.148217 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11"} err="failed to get container status \"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11\": rpc error: code = NotFound desc = could not find container \"f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11\": container with ID starting with f9a624813fb041db9ed26dcc28eb7cd621dc6119c8b053ad8202e145adde4b11 not found: ID does not exist" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.148237 4642 scope.go:117] "RemoveContainer" containerID="f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996" Jan 28 07:02:49 crc kubenswrapper[4642]: E0128 07:02:49.149176 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996\": container with ID starting with f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996 not found: ID does not exist" containerID="f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996" Jan 28 07:02:49 crc kubenswrapper[4642]: I0128 07:02:49.149241 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996"} err="failed to get container status \"f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996\": rpc error: code = NotFound desc = could not find container \"f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996\": container with ID starting with f0f4d0a9745951a678135ce3de80cd379b10c677ecbeecd79127b17118e51996 not found: ID does not exist" Jan 28 07:02:56 crc kubenswrapper[4642]: E0128 07:02:56.167333 4642 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:178ac55eec45150f6e175400f28ac55b" Jan 28 07:02:56 crc kubenswrapper[4642]: E0128 07:02:56.168306 4642 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:178ac55eec45150f6e175400f28ac55b" Jan 28 07:02:56 crc kubenswrapper[4642]: E0128 07:02:56.168537 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:178ac55eec45150f6e175400f28ac55b,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlr5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-869584cbbf-mcgrs_openstack(f0e67125-c7dc-452b-a629-c2b24eda8144): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 07:02:56 crc kubenswrapper[4642]: E0128 07:02:56.169873 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" podUID="f0e67125-c7dc-452b-a629-c2b24eda8144" Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.476855 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.566826 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlr5d\" (UniqueName: \"kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d\") pod \"f0e67125-c7dc-452b-a629-c2b24eda8144\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.567025 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config\") pod \"f0e67125-c7dc-452b-a629-c2b24eda8144\" (UID: \"f0e67125-c7dc-452b-a629-c2b24eda8144\") " Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.567819 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config" (OuterVolumeSpecName: "config") pod "f0e67125-c7dc-452b-a629-c2b24eda8144" (UID: "f0e67125-c7dc-452b-a629-c2b24eda8144"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.571945 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d" (OuterVolumeSpecName: "kube-api-access-nlr5d") pod "f0e67125-c7dc-452b-a629-c2b24eda8144" (UID: "f0e67125-c7dc-452b-a629-c2b24eda8144"). InnerVolumeSpecName "kube-api-access-nlr5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.668450 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlr5d\" (UniqueName: \"kubernetes.io/projected/f0e67125-c7dc-452b-a629-c2b24eda8144-kube-api-access-nlr5d\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:56 crc kubenswrapper[4642]: I0128 07:02:56.668483 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e67125-c7dc-452b-a629-c2b24eda8144-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.254831 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xmwjf" event={"ID":"9c18095f-18c4-435f-a2cc-216a62127faa","Type":"ContainerStarted","Data":"23e2f983107db6f8910c8eb8d2d31c5ecbd63b937ad19a2a8127533956c1d8b1"} Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.256715 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" event={"ID":"f0e67125-c7dc-452b-a629-c2b24eda8144","Type":"ContainerDied","Data":"9ba3cc7e3c955f1b4037ccf407f250a5eca7fc19ae7c1a2c21f7ec0f84aff1f9"} Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.256793 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869584cbbf-mcgrs" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.259346 4642 generic.go:334] "Generic (PLEG): container finished" podID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerID="ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835" exitCode=0 Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.259388 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c657788f-w96v7" event={"ID":"0209bad5-4a93-4b5f-8643-5dd6c997debf","Type":"ContainerDied","Data":"ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835"} Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.268632 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xmwjf" podStartSLOduration=2.811165458 podStartE2EDuration="16.268620328s" podCreationTimestamp="2026-01-28 07:02:41 +0000 UTC" firstStartedPulling="2026-01-28 07:02:42.676709033 +0000 UTC m=+885.908797842" lastFinishedPulling="2026-01-28 07:02:56.134163902 +0000 UTC m=+899.366252712" observedRunningTime="2026-01-28 07:02:57.266742895 +0000 UTC m=+900.498831705" watchObservedRunningTime="2026-01-28 07:02:57.268620328 +0000 UTC m=+900.500709137" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.304046 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.310223 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869584cbbf-mcgrs"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.523058 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.554915 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:02:57 crc kubenswrapper[4642]: E0128 07:02:57.555255 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="extract-content" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.555268 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="extract-content" Jan 28 07:02:57 crc kubenswrapper[4642]: E0128 07:02:57.555290 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="registry-server" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.555296 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="registry-server" Jan 28 07:02:57 crc kubenswrapper[4642]: E0128 07:02:57.555306 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="extract-utilities" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.555311 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="extract-utilities" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.555473 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa646e79-c71d-48a5-8f8e-4d18c9614adf" containerName="registry-server" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.556152 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.557873 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.560866 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.584839 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.584955 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.585010 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.585087 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcj4\" (UniqueName: \"kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.658041 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.679141 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.682260 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.684738 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.687549 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcj4\" (UniqueName: \"kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.687648 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.687744 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.687797 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.688564 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.688782 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.689569 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.691957 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.713829 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdcj4\" (UniqueName: \"kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4\") pod \"dnsmasq-dns-55db57d86f-sd9sb\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.789631 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mgk\" (UniqueName: \"kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.789783 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.789836 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.789867 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.790089 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.891805 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.891914 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mgk\" (UniqueName: \"kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.891975 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.891999 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.892020 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.892693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.892946 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.892948 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.893309 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.906803 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mgk\" (UniqueName: \"kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk\") pod \"dnsmasq-dns-7685d4bf9-pr5j7\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:57 crc kubenswrapper[4642]: I0128 07:02:57.926016 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.011378 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.267747 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c657788f-w96v7" event={"ID":"0209bad5-4a93-4b5f-8643-5dd6c997debf","Type":"ContainerStarted","Data":"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205"} Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.267989 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9c657788f-w96v7" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="dnsmasq-dns" containerID="cri-o://35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205" gracePeriod=10 Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.286271 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9c657788f-w96v7" podStartSLOduration=3.900957777 podStartE2EDuration="29.286249631s" podCreationTimestamp="2026-01-28 07:02:29 +0000 UTC" firstStartedPulling="2026-01-28 07:02:30.830845372 +0000 UTC m=+874.062934181" lastFinishedPulling="2026-01-28 07:02:56.216137226 +0000 UTC m=+899.448226035" observedRunningTime="2026-01-28 07:02:58.279779779 +0000 UTC m=+901.511868588" watchObservedRunningTime="2026-01-28 07:02:58.286249631 +0000 UTC m=+901.518338439" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.321803 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:02:58 crc kubenswrapper[4642]: W0128 07:02:58.329168 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6799e5a_9c37_43f6_abe1_365221fe2249.slice/crio-b2f6440ff4addde61925a03308f9531155b59a73b5b8b87d54b3e4f045f6c18d WatchSource:0}: Error finding container b2f6440ff4addde61925a03308f9531155b59a73b5b8b87d54b3e4f045f6c18d: Status 404 returned error can't find the container with id b2f6440ff4addde61925a03308f9531155b59a73b5b8b87d54b3e4f045f6c18d Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.415866 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.743767 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.810429 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config\") pod \"0209bad5-4a93-4b5f-8643-5dd6c997debf\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.810498 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spxn\" (UniqueName: \"kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn\") pod \"0209bad5-4a93-4b5f-8643-5dd6c997debf\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.810605 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc\") pod \"0209bad5-4a93-4b5f-8643-5dd6c997debf\" (UID: \"0209bad5-4a93-4b5f-8643-5dd6c997debf\") " Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.815471 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn" (OuterVolumeSpecName: "kube-api-access-4spxn") pod "0209bad5-4a93-4b5f-8643-5dd6c997debf" (UID: "0209bad5-4a93-4b5f-8643-5dd6c997debf"). InnerVolumeSpecName "kube-api-access-4spxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.838460 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0209bad5-4a93-4b5f-8643-5dd6c997debf" (UID: "0209bad5-4a93-4b5f-8643-5dd6c997debf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.838906 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config" (OuterVolumeSpecName: "config") pod "0209bad5-4a93-4b5f-8643-5dd6c997debf" (UID: "0209bad5-4a93-4b5f-8643-5dd6c997debf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.912794 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.913036 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4spxn\" (UniqueName: \"kubernetes.io/projected/0209bad5-4a93-4b5f-8643-5dd6c997debf-kube-api-access-4spxn\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:58 crc kubenswrapper[4642]: I0128 07:02:58.913096 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0209bad5-4a93-4b5f-8643-5dd6c997debf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.107145 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e67125-c7dc-452b-a629-c2b24eda8144" path="/var/lib/kubelet/pods/f0e67125-c7dc-452b-a629-c2b24eda8144/volumes" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.277681 4642 generic.go:334] "Generic (PLEG): container finished" podID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerID="b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf" exitCode=0 Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.277753 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" event={"ID":"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0","Type":"ContainerDied","Data":"b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.278085 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" event={"ID":"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0","Type":"ContainerStarted","Data":"a23cd79df42fbf6321d479eea64006c2469c371fbc72dc26a5ebe71f0d6bb2e4"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.283664 4642 generic.go:334] "Generic (PLEG): container finished" podID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerID="35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205" exitCode=0 Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.283776 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c657788f-w96v7" event={"ID":"0209bad5-4a93-4b5f-8643-5dd6c997debf","Type":"ContainerDied","Data":"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.283855 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c657788f-w96v7" event={"ID":"0209bad5-4a93-4b5f-8643-5dd6c997debf","Type":"ContainerDied","Data":"a356850a353708674eaec11eabf2d7d82173ebab5d7046a9feec77632bc20ff2"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.283851 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c657788f-w96v7" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.283884 4642 scope.go:117] "RemoveContainer" containerID="35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.285379 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerStarted","Data":"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.290072 4642 generic.go:334] "Generic (PLEG): container finished" podID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerID="607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e" exitCode=0 Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.290106 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" event={"ID":"d6799e5a-9c37-43f6-abe1-365221fe2249","Type":"ContainerDied","Data":"607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.290125 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" event={"ID":"d6799e5a-9c37-43f6-abe1-365221fe2249","Type":"ContainerStarted","Data":"b2f6440ff4addde61925a03308f9531155b59a73b5b8b87d54b3e4f045f6c18d"} Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.429890 4642 scope.go:117] "RemoveContainer" containerID="ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.463737 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.463842 4642 scope.go:117] "RemoveContainer" containerID="35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205" Jan 28 07:02:59 crc kubenswrapper[4642]: E0128 07:02:59.464455 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205\": container with ID starting with 35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205 not found: ID does not exist" containerID="35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.464498 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205"} err="failed to get container status \"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205\": rpc error: code = NotFound desc = could not find container \"35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205\": container with ID starting with 35e0f0b6a6d741f97eb4c8c5e79decad7ade66b34b3c9e171e949412d7115205 not found: ID does not exist" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.464526 4642 scope.go:117] "RemoveContainer" containerID="ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835" Jan 28 07:02:59 crc kubenswrapper[4642]: E0128 07:02:59.465028 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835\": container with ID starting with ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835 
not found: ID does not exist" containerID="ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.465072 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835"} err="failed to get container status \"ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835\": rpc error: code = NotFound desc = could not find container \"ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835\": container with ID starting with ddba4a9159ccca2a76ef978cb3a3231262eb9c774c679fdbe3f8148cdfa36835 not found: ID does not exist" Jan 28 07:02:59 crc kubenswrapper[4642]: I0128 07:02:59.469268 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9c657788f-w96v7"] Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.298455 4642 generic.go:334] "Generic (PLEG): container finished" podID="5254c758-f70f-49c8-b0d6-9754ba3a16c6" containerID="46a410a2183e9ae1c89df795af5ad6331f45b002687d8912be780d40846aa719" exitCode=0 Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.298559 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" event={"ID":"5254c758-f70f-49c8-b0d6-9754ba3a16c6","Type":"ContainerDied","Data":"46a410a2183e9ae1c89df795af5ad6331f45b002687d8912be780d40846aa719"} Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.301035 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerStarted","Data":"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5"} Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.304575 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" event={"ID":"d6799e5a-9c37-43f6-abe1-365221fe2249","Type":"ContainerStarted","Data":"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803"} Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.304680 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.308176 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" event={"ID":"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0","Type":"ContainerStarted","Data":"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5"} Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.351815 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" podStartSLOduration=3.351797238 podStartE2EDuration="3.351797238s" podCreationTimestamp="2026-01-28 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:00.346215878 +0000 UTC m=+903.578304686" watchObservedRunningTime="2026-01-28 07:03:00.351797238 +0000 UTC m=+903.583886047" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.592175 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.605843 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" podStartSLOduration=3.605806534 podStartE2EDuration="3.605806534s" podCreationTimestamp="2026-01-28 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:00.369896376 +0000 UTC m=+903.601985185" watchObservedRunningTime="2026-01-28 07:03:00.605806534 +0000 UTC m=+903.837895343" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.743386 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc\") pod \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.743829 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsbv\" (UniqueName: \"kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv\") pod \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.743933 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config\") pod \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\" (UID: \"5254c758-f70f-49c8-b0d6-9754ba3a16c6\") " Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.749941 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv" (OuterVolumeSpecName: "kube-api-access-kwsbv") pod "5254c758-f70f-49c8-b0d6-9754ba3a16c6" (UID: "5254c758-f70f-49c8-b0d6-9754ba3a16c6"). InnerVolumeSpecName "kube-api-access-kwsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.761993 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5254c758-f70f-49c8-b0d6-9754ba3a16c6" (UID: "5254c758-f70f-49c8-b0d6-9754ba3a16c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.766925 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config" (OuterVolumeSpecName: "config") pod "5254c758-f70f-49c8-b0d6-9754ba3a16c6" (UID: "5254c758-f70f-49c8-b0d6-9754ba3a16c6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.846102 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.846128 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5254c758-f70f-49c8-b0d6-9754ba3a16c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:00 crc kubenswrapper[4642]: I0128 07:03:00.846138 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsbv\" (UniqueName: \"kubernetes.io/projected/5254c758-f70f-49c8-b0d6-9754ba3a16c6-kube-api-access-kwsbv\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.106455 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" path="/var/lib/kubelet/pods/0209bad5-4a93-4b5f-8643-5dd6c997debf/volumes" Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.317044 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" event={"ID":"5254c758-f70f-49c8-b0d6-9754ba3a16c6","Type":"ContainerDied","Data":"ba17194d93f4e843af34d443b17d9c564633d3f09a8365afed33e3a868c5f484"} Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.317108 4642 scope.go:117] "RemoveContainer" containerID="46a410a2183e9ae1c89df795af5ad6331f45b002687d8912be780d40846aa719" Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.317172 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fc74d969-mwlrv" Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.317548 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.351767 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:03:01 crc kubenswrapper[4642]: I0128 07:03:01.359292 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59fc74d969-mwlrv"] Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.105694 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5254c758-f70f-49c8-b0d6-9754ba3a16c6" path="/var/lib/kubelet/pods/5254c758-f70f-49c8-b0d6-9754ba3a16c6/volumes" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.335117 4642 generic.go:334] "Generic (PLEG): container finished" podID="558fcc2b-c367-48b5-a3b6-de01fca006f3" containerID="c9270eb13fc079ba33889ebcef2b8087e2dc296c6f2115a9bfbc1e89d4e838e0" exitCode=0 Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.335219 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" event={"ID":"558fcc2b-c367-48b5-a3b6-de01fca006f3","Type":"ContainerDied","Data":"c9270eb13fc079ba33889ebcef2b8087e2dc296c6f2115a9bfbc1e89d4e838e0"} Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.339039 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"602638e1-0a19-4a7f-a752-50b0e228a7da","Type":"ContainerStarted","Data":"e59afe6d01d96d19593caffd3a3d15fff1dc1f84f199729368fe82d995b85cad"} Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.581629 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.694034 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq75n\" (UniqueName: \"kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n\") pod \"558fcc2b-c367-48b5-a3b6-de01fca006f3\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.694107 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc\") pod \"558fcc2b-c367-48b5-a3b6-de01fca006f3\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.694236 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config\") pod \"558fcc2b-c367-48b5-a3b6-de01fca006f3\" (UID: \"558fcc2b-c367-48b5-a3b6-de01fca006f3\") " Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.698392 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n" (OuterVolumeSpecName: "kube-api-access-mq75n") pod "558fcc2b-c367-48b5-a3b6-de01fca006f3" (UID: "558fcc2b-c367-48b5-a3b6-de01fca006f3"). InnerVolumeSpecName "kube-api-access-mq75n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.708570 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "558fcc2b-c367-48b5-a3b6-de01fca006f3" (UID: "558fcc2b-c367-48b5-a3b6-de01fca006f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.710880 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config" (OuterVolumeSpecName: "config") pod "558fcc2b-c367-48b5-a3b6-de01fca006f3" (UID: "558fcc2b-c367-48b5-a3b6-de01fca006f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.796369 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq75n\" (UniqueName: \"kubernetes.io/projected/558fcc2b-c367-48b5-a3b6-de01fca006f3-kube-api-access-mq75n\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.796418 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:03 crc kubenswrapper[4642]: I0128 07:03:03.796428 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558fcc2b-c367-48b5-a3b6-de01fca006f3-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:04 crc kubenswrapper[4642]: I0128 07:03:04.363523 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" event={"ID":"558fcc2b-c367-48b5-a3b6-de01fca006f3","Type":"ContainerDied","Data":"23e9a110534c3fc8ac4368604e5249abaa62a3b1383861500a3806f74cb654f9"} Jan 28 07:03:04 crc kubenswrapper[4642]: I0128 07:03:04.363586 4642 scope.go:117] "RemoveContainer" containerID="c9270eb13fc079ba33889ebcef2b8087e2dc296c6f2115a9bfbc1e89d4e838e0" Jan 28 07:03:04 crc kubenswrapper[4642]: I0128 07:03:04.363545 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc877b869-qgvmg" Jan 28 07:03:04 crc kubenswrapper[4642]: I0128 07:03:04.433532 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:03:04 crc kubenswrapper[4642]: I0128 07:03:04.439253 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fc877b869-qgvmg"] Jan 28 07:03:05 crc kubenswrapper[4642]: I0128 07:03:05.106567 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558fcc2b-c367-48b5-a3b6-de01fca006f3" path="/var/lib/kubelet/pods/558fcc2b-c367-48b5-a3b6-de01fca006f3/volumes" Jan 28 07:03:06 crc kubenswrapper[4642]: I0128 07:03:06.380880 4642 generic.go:334] "Generic (PLEG): container finished" podID="602638e1-0a19-4a7f-a752-50b0e228a7da" containerID="e59afe6d01d96d19593caffd3a3d15fff1dc1f84f199729368fe82d995b85cad" exitCode=0 Jan 28 07:03:06 crc kubenswrapper[4642]: I0128 07:03:06.380999 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"602638e1-0a19-4a7f-a752-50b0e228a7da","Type":"ContainerDied","Data":"e59afe6d01d96d19593caffd3a3d15fff1dc1f84f199729368fe82d995b85cad"} Jan 28 07:03:07 crc kubenswrapper[4642]: I0128 07:03:07.386883 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"602638e1-0a19-4a7f-a752-50b0e228a7da","Type":"ContainerStarted","Data":"b4259e00fcca732c5a9b536dc578e316c90509c2cbe73176fd347f4c68894399"} Jan 28 07:03:07 crc kubenswrapper[4642]: I0128 07:03:07.405255 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.192705237 podStartE2EDuration="36.405242585s" podCreationTimestamp="2026-01-28 07:02:31 +0000 UTC" firstStartedPulling="2026-01-28 07:02:33.466024092 +0000 UTC m=+876.698112902" lastFinishedPulling="2026-01-28 07:03:02.678561441 +0000 UTC m=+905.910650250" observedRunningTime="2026-01-28 07:03:07.402063896 +0000 UTC m=+910.634152705" watchObservedRunningTime="2026-01-28 07:03:07.405242585 +0000 
UTC m=+910.637331395" Jan 28 07:03:07 crc kubenswrapper[4642]: I0128 07:03:07.927335 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.012879 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.054480 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.199388 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.199468 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.392124 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="dnsmasq-dns" containerID="cri-o://73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803" gracePeriod=10 Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.955286 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.990094 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc\") pod \"d6799e5a-9c37-43f6-abe1-365221fe2249\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.990494 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcj4\" (UniqueName: \"kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4\") pod \"d6799e5a-9c37-43f6-abe1-365221fe2249\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.990553 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb\") pod \"d6799e5a-9c37-43f6-abe1-365221fe2249\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " Jan 28 07:03:08 crc kubenswrapper[4642]: I0128 07:03:08.990573 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config\") pod \"d6799e5a-9c37-43f6-abe1-365221fe2249\" (UID: \"d6799e5a-9c37-43f6-abe1-365221fe2249\") " Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.003171 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4" (OuterVolumeSpecName: "kube-api-access-kdcj4") pod 
"d6799e5a-9c37-43f6-abe1-365221fe2249" (UID: "d6799e5a-9c37-43f6-abe1-365221fe2249"). InnerVolumeSpecName "kube-api-access-kdcj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.036562 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6799e5a-9c37-43f6-abe1-365221fe2249" (UID: "d6799e5a-9c37-43f6-abe1-365221fe2249"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.057040 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config" (OuterVolumeSpecName: "config") pod "d6799e5a-9c37-43f6-abe1-365221fe2249" (UID: "d6799e5a-9c37-43f6-abe1-365221fe2249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.059926 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6799e5a-9c37-43f6-abe1-365221fe2249" (UID: "d6799e5a-9c37-43f6-abe1-365221fe2249"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.091259 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.091332 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.091386 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6799e5a-9c37-43f6-abe1-365221fe2249-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.091454 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdcj4\" (UniqueName: \"kubernetes.io/projected/d6799e5a-9c37-43f6-abe1-365221fe2249-kube-api-access-kdcj4\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.399915 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59611c4a-ee6f-4f16-9804-aba66d47d908","Type":"ContainerStarted","Data":"54a5a7a5182a906b55f752ef74011c37fc52e13e76da3adbc185b32ae49b017a"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.400053 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.400991 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c03a521e-dd32-4a74-b452-512fe8bdae8e","Type":"ContainerStarted","Data":"cd3c7403651b59e70504dfa1a86a84722456d5444171a5af028a061e9e956990"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.402628 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"f7f96797-56a8-4fc5-a520-cfaecf44c4a0","Type":"ContainerStarted","Data":"684dc3563f667d7bb79960f0655df924b8879f0ecdaa7b1e65d2447476d75353"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.402671 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f7f96797-56a8-4fc5-a520-cfaecf44c4a0","Type":"ContainerStarted","Data":"68b4ea0ec1ddf69909fb69ea6d90f6e594d08b7daa25c9b4d230963d1e2780bf"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.404005 4642 generic.go:334] "Generic (PLEG): container finished" podID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerID="73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803" exitCode=0 Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.404091 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" event={"ID":"d6799e5a-9c37-43f6-abe1-365221fe2249","Type":"ContainerDied","Data":"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.404093 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.404123 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55db57d86f-sd9sb" event={"ID":"d6799e5a-9c37-43f6-abe1-365221fe2249","Type":"ContainerDied","Data":"b2f6440ff4addde61925a03308f9531155b59a73b5b8b87d54b3e4f045f6c18d"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.404145 4642 scope.go:117] "RemoveContainer" containerID="73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.406292 4642 generic.go:334] "Generic (PLEG): container finished" podID="e0307f10-0ff0-4421-91a1-34ff47b17d16" containerID="4e2ef0ffe19fd5c936ec04e904fc3db06be64e875dd6034c182124ccf6fb9bf6" exitCode=0 Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.406328 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nvphd" event={"ID":"e0307f10-0ff0-4421-91a1-34ff47b17d16","Type":"ContainerDied","Data":"4e2ef0ffe19fd5c936ec04e904fc3db06be64e875dd6034c182124ccf6fb9bf6"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.411987 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9d4kj" event={"ID":"29b93c34-de22-48ac-80da-b79048401506","Type":"ContainerStarted","Data":"006e1c3c3ede1a569186e517990e2a251251a68b3541cfd05462701e6202902b"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.412102 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9d4kj" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.422717 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08e72283-7898-4b33-a2ef-5ebe2a319fe8","Type":"ContainerStarted","Data":"494c2d075271f5b9f0849e7cb3691b013241ad9bf4360a2096e388a212449ee0"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.422763 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08e72283-7898-4b33-a2ef-5ebe2a319fe8","Type":"ContainerStarted","Data":"85e3ae7b8e15b658e8903bb600a93005d155b242ed1727312681ec004a84d923"} Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.428923 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.787466072 
podStartE2EDuration="35.428906559s" podCreationTimestamp="2026-01-28 07:02:34 +0000 UTC" firstStartedPulling="2026-01-28 07:02:35.05385453 +0000 UTC m=+878.285943339" lastFinishedPulling="2026-01-28 07:03:08.695295016 +0000 UTC m=+911.927383826" observedRunningTime="2026-01-28 07:03:09.425964805 +0000 UTC m=+912.658053614" watchObservedRunningTime="2026-01-28 07:03:09.428906559 +0000 UTC m=+912.660995368" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.448810 4642 scope.go:117] "RemoveContainer" containerID="607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.470310 4642 scope.go:117] "RemoveContainer" containerID="73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803" Jan 28 07:03:09 crc kubenswrapper[4642]: E0128 07:03:09.473351 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803\": container with ID starting with 73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803 not found: ID does not exist" containerID="73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.473392 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803"} err="failed to get container status \"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803\": rpc error: code = NotFound desc = could not find container \"73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803\": container with ID starting with 73f4cfee9e27c66b506dac7b56bffd9dcd8a5b610f491b9e28e89767f91d6803 not found: ID does not exist" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.473430 4642 scope.go:117] "RemoveContainer" containerID="607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e" Jan 28 07:03:09 crc kubenswrapper[4642]: E0128 07:03:09.473812 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e\": container with ID starting with 607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e not found: ID does not exist" containerID="607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.473848 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e"} err="failed to get container status \"607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e\": rpc error: code = NotFound desc = could not find container \"607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e\": container with ID starting with 607a2ce4058d5b02128c81257b1747f1f70fedc678e5e6a1a97cebe1e13bbb0e not found: ID does not exist" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.475771 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.715845516 podStartE2EDuration="28.4757586s" podCreationTimestamp="2026-01-28 07:02:41 +0000 UTC" firstStartedPulling="2026-01-28 07:02:42.986573117 +0000 UTC m=+886.218661926" lastFinishedPulling="2026-01-28 07:03:08.7464862 +0000 UTC m=+911.978575010" observedRunningTime="2026-01-28 
07:03:09.47488216 +0000 UTC m=+912.706970969" watchObservedRunningTime="2026-01-28 07:03:09.4757586 +0000 UTC m=+912.707847409" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.475870 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.612339833 podStartE2EDuration="27.475866602s" podCreationTimestamp="2026-01-28 07:02:42 +0000 UTC" firstStartedPulling="2026-01-28 07:02:43.859911381 +0000 UTC m=+887.092000191" lastFinishedPulling="2026-01-28 07:03:08.723438151 +0000 UTC m=+911.955526960" observedRunningTime="2026-01-28 07:03:09.462370302 +0000 UTC m=+912.694459112" watchObservedRunningTime="2026-01-28 07:03:09.475866602 +0000 UTC m=+912.707955412" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.498683 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9d4kj" podStartSLOduration=1.780418961 podStartE2EDuration="29.498661686s" podCreationTimestamp="2026-01-28 07:02:40 +0000 UTC" firstStartedPulling="2026-01-28 07:02:41.042207846 +0000 UTC m=+884.274296655" lastFinishedPulling="2026-01-28 07:03:08.76045057 +0000 UTC m=+911.992539380" observedRunningTime="2026-01-28 07:03:09.488132308 +0000 UTC m=+912.720221106" watchObservedRunningTime="2026-01-28 07:03:09.498661686 +0000 UTC m=+912.730750495" Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.507546 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:03:09 crc kubenswrapper[4642]: I0128 07:03:09.528266 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55db57d86f-sd9sb"] Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.395068 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.431989 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nvphd" event={"ID":"e0307f10-0ff0-4421-91a1-34ff47b17d16","Type":"ContainerStarted","Data":"e128c387bcd21c89fadd0967e830b24f71459e63f2bcaf7e273d288bfab1d707"} Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.432798 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.432902 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.432914 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nvphd" event={"ID":"e0307f10-0ff0-4421-91a1-34ff47b17d16","Type":"ContainerStarted","Data":"119afa89f3d235a55a21d1abfb9d930410cb60825e2ee11516971fbdfb55cfbe"} Jan 28 07:03:10 crc kubenswrapper[4642]: I0128 07:03:10.449029 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-nvphd" podStartSLOduration=2.9359481389999997 podStartE2EDuration="30.449004837s" podCreationTimestamp="2026-01-28 07:02:40 +0000 UTC" firstStartedPulling="2026-01-28 07:02:41.180992064 +0000 UTC m=+884.413080874" lastFinishedPulling="2026-01-28 07:03:08.694048762 +0000 UTC m=+911.926137572" observedRunningTime="2026-01-28 07:03:10.447504103 +0000 UTC m=+913.679592913" watchObservedRunningTime="2026-01-28 07:03:10.449004837 +0000 UTC m=+913.681093646" Jan 28 07:03:11 crc kubenswrapper[4642]: I0128 07:03:11.105665 4642 kubelet_volumes.go:163] "Cleaned up 
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.385491 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.385539 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.414065 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.448275 4642 generic.go:334] "Generic (PLEG): container finished" podID="c03a521e-dd32-4a74-b452-512fe8bdae8e" containerID="cd3c7403651b59e70504dfa1a86a84722456d5444171a5af028a061e9e956990" exitCode=0
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.448337 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c03a521e-dd32-4a74-b452-512fe8bdae8e","Type":"ContainerDied","Data":"cd3c7403651b59e70504dfa1a86a84722456d5444171a5af028a061e9e956990"}
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.875370 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.875585 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 28 07:03:12 crc kubenswrapper[4642]: I0128 07:03:12.929408 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 28 07:03:13 crc kubenswrapper[4642]: I0128 07:03:13.395502 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 28 07:03:13 crc kubenswrapper[4642]: I0128 07:03:13.421935 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 28 07:03:13 crc kubenswrapper[4642]: I0128 07:03:13.480774 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 28 07:03:13 crc kubenswrapper[4642]: I0128 07:03:13.512442 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.318441 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-64da-account-create-update-z85gw"]
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.318937 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.318956 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.318982 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.318989 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.319008 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5254c758-f70f-49c8-b0d6-9754ba3a16c6" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319014 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5254c758-f70f-49c8-b0d6-9754ba3a16c6" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.319026 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558fcc2b-c367-48b5-a3b6-de01fca006f3" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319030 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="558fcc2b-c367-48b5-a3b6-de01fca006f3" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.319038 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319045 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: E0128 07:03:14.319054 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319059 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319212 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="5254c758-f70f-49c8-b0d6-9754ba3a16c6" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319226 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="558fcc2b-c367-48b5-a3b6-de01fca006f3" containerName="init"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319234 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0209bad5-4a93-4b5f-8643-5dd6c997debf" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319240 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6799e5a-9c37-43f6-abe1-365221fe2249" containerName="dnsmasq-dns"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.319660 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.321981 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.326029 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64da-account-create-update-z85gw"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.365528 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-98zhc"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.366335 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98zhc"
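The paired cpu_manager/state_mem and memory_manager lines above show kubelet reclaiming per-container resource-manager state for pods that no longer exist (the removed dnsmasq-dns pods) before admitting the new keystone job pods; the E-level "RemoveStaleState" lines are expected bookkeeping here, not failures. A loose sketch of that sweep with invented types (not kubelet's actual implementation):

```go
package main

import "fmt"

// key identifies one container's stale assignment; illustrative, not kubelet's type.
type key struct{ podUID, containerName string }

// removeStaleState drops assignments whose owning pod is no longer active.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.containerName)
			delete(assignments, k) // corresponds to "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"d6799e5a-9c37-43f6-abe1-365221fe2249", "init"}:        "cpus 0-1",
		{"d6799e5a-9c37-43f6-abe1-365221fe2249", "dnsmasq-dns"}: "cpus 2",
	}
	removeStaleState(assignments, map[string]bool{}) // no active pods: both entries dropped
}
```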
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.371857 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-98zhc"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.376684 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.376824 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkjsl\" (UniqueName: \"kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.460816 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c03a521e-dd32-4a74-b452-512fe8bdae8e","Type":"ContainerStarted","Data":"188dc3da9f6ad7cf8698c3c257327127180f3843bd79da7e613ff0824fc14c6b"}
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.477535 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm84b\" (UniqueName: \"kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.477578 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkjsl\" (UniqueName: \"kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.477648 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.477668 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.478296 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.479136 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.663572404 podStartE2EDuration="41.479126914s" podCreationTimestamp="2026-01-28 07:02:33 +0000 UTC" firstStartedPulling="2026-01-28 07:02:34.881802656 +0000 UTC m=+878.113891465" lastFinishedPulling="2026-01-28 07:03:08.697357166 +0000 UTC m=+911.929445975" observedRunningTime="2026-01-28 07:03:14.474264827 +0000 UTC m=+917.706353637" watchObservedRunningTime="2026-01-28 07:03:14.479126914 +0000 UTC m=+917.711215723"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.493540 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkjsl\" (UniqueName: \"kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl\") pod \"keystone-64da-account-create-update-z85gw\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.565160 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rg5pj"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.565931 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.572541 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rg5pj"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.579074 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.579626 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm84b\" (UniqueName: \"kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.579789 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.593484 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm84b\" (UniqueName: \"kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b\") pod \"keystone-db-create-98zhc\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") " pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.632711 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.655570 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b6eb-account-create-update-4q9gp"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.656432 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6eb-account-create-update-4q9gp"
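For each pod admitted above, the volume lines follow one fixed order: reconciler_common.go:245 (VerifyControllerAttachedVolume) registers the volume, reconciler_common.go:218 (MountVolume) starts the mount, and operation_generator.go:637 reports SetUp success; only then can the sandbox start. A rough desired-state/actual-state sketch of that loop, with illustrative names rather than kubelet's real API:

```go
package main

import "fmt"

// reconcile mounts volumes that are desired but not yet mounted, and unmounts
// ones that are mounted but no longer desired -- the two directions the
// MountVolume / UnmountVolume log lines in this section record.
func reconcile(desired, mounted map[string]bool) {
	for vol := range desired {
		if !mounted[vol] {
			fmt.Println("VerifyControllerAttachedVolume started for volume", vol)
			fmt.Println("MountVolume started for volume", vol)
			mounted[vol] = true // "MountVolume.SetUp succeeded"
		}
	}
	for vol := range mounted {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for volume", vol)
			delete(mounted, vol) // "Volume detached"
		}
	}
}

func main() {
	mounted := map[string]bool{}
	reconcile(map[string]bool{"operator-scripts": true, "kube-api-access-hm84b": true}, mounted)
}
```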
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.658151 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.666719 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b6eb-account-create-update-4q9gp"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.670328 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.679984 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.682353 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.682384 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.682488 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5lz\" (UniqueName: \"kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.682515 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7sp\" (UniqueName: \"kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.783695 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5lz\" (UniqueName: \"kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.784016 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7sp\" (UniqueName: \"kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.784283 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.784311 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.784929 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.784993 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.800597 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5lz\" (UniqueName: \"kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz\") pod \"placement-b6eb-account-create-update-4q9gp\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " pod="openstack/placement-b6eb-account-create-update-4q9gp"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.800738 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7sp\" (UniqueName: \"kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp\") pod \"placement-db-create-rg5pj\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.854886 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dlrtb"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.855674 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dlrtb"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.862860 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dlrtb"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.887654 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rg5pj"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.888285 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.888347 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tl8\" (UniqueName: \"kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.948130 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-703f-account-create-update-qw47f"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.949796 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-703f-account-create-update-qw47f"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.957662 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-703f-account-create-update-qw47f"]
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.966381 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.990095 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.990139 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhx7q\" (UniqueName: \"kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.990175 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.990321 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tl8\" (UniqueName: \"kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb"
Jan 28 07:03:14 crc kubenswrapper[4642]: I0128 07:03:14.990821 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb"
\"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.005549 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tl8\" (UniqueName: \"kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8\") pod \"glance-db-create-dlrtb\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " pod="openstack/glance-db-create-dlrtb" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.011565 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6eb-account-create-update-4q9gp" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.022524 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64da-account-create-update-z85gw"] Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.092719 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.092816 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhx7q\" (UniqueName: \"kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.093404 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.110719 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhx7q\" (UniqueName: \"kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q\") pod \"glance-703f-account-create-update-qw47f\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.116811 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-98zhc"] Jan 28 07:03:15 crc kubenswrapper[4642]: W0128 07:03:15.118225 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7e90a78_c529_4572_83af_92513b7ce545.slice/crio-9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c WatchSource:0}: Error finding container 9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c: Status 404 returned error can't find the container with id 9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.174198 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dlrtb" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.263370 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.289662 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rg5pj"] Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.406176 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b6eb-account-create-update-4q9gp"] Jan 28 07:03:15 crc kubenswrapper[4642]: W0128 07:03:15.419970 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc37e9b0e_970a_44db_bc75_0782625ab2a2.slice/crio-238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4 WatchSource:0}: Error finding container 238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4: Status 404 returned error can't find the container with id 238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4 Jan 28 07:03:15 crc kubenswrapper[4642]: W0128 07:03:15.443952 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95ef778a_9202_43a6_b1b0_fc0938621c71.slice/crio-e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1 WatchSource:0}: Error finding container e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1: Status 404 returned error can't find the container with id e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1 Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.467585 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6eb-account-create-update-4q9gp" event={"ID":"95ef778a-9202-43a6-b1b0-fc0938621c71","Type":"ContainerStarted","Data":"e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.468993 4642 generic.go:334] "Generic (PLEG): container finished" podID="b7e90a78-c529-4572-83af-92513b7ce545" containerID="41ac7ba403bc3a29df1cde08ca227a70a61a69d3668bf8d4ebb8ba754652c629" exitCode=0 Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.469053 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98zhc" event={"ID":"b7e90a78-c529-4572-83af-92513b7ce545","Type":"ContainerDied","Data":"41ac7ba403bc3a29df1cde08ca227a70a61a69d3668bf8d4ebb8ba754652c629"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.469081 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98zhc" event={"ID":"b7e90a78-c529-4572-83af-92513b7ce545","Type":"ContainerStarted","Data":"9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.470456 4642 generic.go:334] "Generic (PLEG): container finished" podID="c10693d6-4ced-4a1b-a821-887fac229b90" containerID="eb960a0316c91acff06525b7c0f6749f8951c001c383be27618cda1cbfa5a48b" exitCode=0 Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.470537 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64da-account-create-update-z85gw" event={"ID":"c10693d6-4ced-4a1b-a821-887fac229b90","Type":"ContainerDied","Data":"eb960a0316c91acff06525b7c0f6749f8951c001c383be27618cda1cbfa5a48b"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.470558 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64da-account-create-update-z85gw" 
event={"ID":"c10693d6-4ced-4a1b-a821-887fac229b90","Type":"ContainerStarted","Data":"128df50800f3305cab17696d34c0570b8969dec288fb6ca00dee01ede4eab742"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.471787 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rg5pj" event={"ID":"c37e9b0e-970a-44db-bc75-0782625ab2a2","Type":"ContainerStarted","Data":"238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4"} Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.561285 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dlrtb"] Jan 28 07:03:15 crc kubenswrapper[4642]: I0128 07:03:15.642406 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-703f-account-create-update-qw47f"] Jan 28 07:03:15 crc kubenswrapper[4642]: W0128 07:03:15.643958 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bbb7547_8266_45d7_9198_1f6fd3a4418b.slice/crio-a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b WatchSource:0}: Error finding container a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b: Status 404 returned error can't find the container with id a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.478963 4642 generic.go:334] "Generic (PLEG): container finished" podID="c37e9b0e-970a-44db-bc75-0782625ab2a2" containerID="b8385e6027184f93c993c5dc6836d52a30c77447640e2428806bcc3cfce26f4d" exitCode=0 Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.479016 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rg5pj" event={"ID":"c37e9b0e-970a-44db-bc75-0782625ab2a2","Type":"ContainerDied","Data":"b8385e6027184f93c993c5dc6836d52a30c77447640e2428806bcc3cfce26f4d"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.481440 4642 generic.go:334] "Generic (PLEG): container finished" podID="4bbb7547-8266-45d7-9198-1f6fd3a4418b" containerID="b2259952f305589b13f158320ae6f333bcd9218bef4a783ee41ca75cb770b361" exitCode=0 Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.481557 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-703f-account-create-update-qw47f" event={"ID":"4bbb7547-8266-45d7-9198-1f6fd3a4418b","Type":"ContainerDied","Data":"b2259952f305589b13f158320ae6f333bcd9218bef4a783ee41ca75cb770b361"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.481655 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-703f-account-create-update-qw47f" event={"ID":"4bbb7547-8266-45d7-9198-1f6fd3a4418b","Type":"ContainerStarted","Data":"a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.482794 4642 generic.go:334] "Generic (PLEG): container finished" podID="95ef778a-9202-43a6-b1b0-fc0938621c71" containerID="38e8b7b27d4683fe6c07bef8e86d9255eb5962e071a0583ce41df11189fef666" exitCode=0 Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.482844 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6eb-account-create-update-4q9gp" event={"ID":"95ef778a-9202-43a6-b1b0-fc0938621c71","Type":"ContainerDied","Data":"38e8b7b27d4683fe6c07bef8e86d9255eb5962e071a0583ce41df11189fef666"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.484235 4642 generic.go:334] "Generic (PLEG): container finished" 
podID="9ecdc58f-cfc4-47ac-959c-34336d6c2e36" containerID="255b576e9feec0de26dbc9a69f13edcab06e7cfe5df136a1809c418f4bbe3287" exitCode=0 Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.484271 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dlrtb" event={"ID":"9ecdc58f-cfc4-47ac-959c-34336d6c2e36","Type":"ContainerDied","Data":"255b576e9feec0de26dbc9a69f13edcab06e7cfe5df136a1809c418f4bbe3287"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.484294 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dlrtb" event={"ID":"9ecdc58f-cfc4-47ac-959c-34336d6c2e36","Type":"ContainerStarted","Data":"50aaf4bc6addb04866d80e8e47d72685231cfe185cf6c0ad7f233ad28f3222b2"} Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.642277 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"] Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.656746 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.667458 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"] Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.751019 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.751154 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.751241 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.751292 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnhn\" (UniqueName: \"kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.751326 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.855838 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.856401 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.856479 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.856544 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvnhn\" (UniqueName: \"kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.856585 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.856728 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.857255 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.858779 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.860029 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.875283 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvnhn\" (UniqueName: \"kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn\") pod \"dnsmasq-dns-6c56fc69cc-rtglx\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") " 
pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.877216 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64da-account-create-update-z85gw" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.936910 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98zhc" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.958350 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts\") pod \"c10693d6-4ced-4a1b-a821-887fac229b90\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.958496 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkjsl\" (UniqueName: \"kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl\") pod \"c10693d6-4ced-4a1b-a821-887fac229b90\" (UID: \"c10693d6-4ced-4a1b-a821-887fac229b90\") " Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.958974 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c10693d6-4ced-4a1b-a821-887fac229b90" (UID: "c10693d6-4ced-4a1b-a821-887fac229b90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.959211 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c10693d6-4ced-4a1b-a821-887fac229b90-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.962107 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl" (OuterVolumeSpecName: "kube-api-access-kkjsl") pod "c10693d6-4ced-4a1b-a821-887fac229b90" (UID: "c10693d6-4ced-4a1b-a821-887fac229b90"). InnerVolumeSpecName "kube-api-access-kkjsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:16 crc kubenswrapper[4642]: I0128 07:03:16.971704 4642 util.go:30] "No sandbox for pod can be found. 
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.059885 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm84b\" (UniqueName: \"kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b\") pod \"b7e90a78-c529-4572-83af-92513b7ce545\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") "
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.060064 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts\") pod \"b7e90a78-c529-4572-83af-92513b7ce545\" (UID: \"b7e90a78-c529-4572-83af-92513b7ce545\") "
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.060485 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkjsl\" (UniqueName: \"kubernetes.io/projected/c10693d6-4ced-4a1b-a821-887fac229b90-kube-api-access-kkjsl\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.060634 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7e90a78-c529-4572-83af-92513b7ce545" (UID: "b7e90a78-c529-4572-83af-92513b7ce545"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.063556 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b" (OuterVolumeSpecName: "kube-api-access-hm84b") pod "b7e90a78-c529-4572-83af-92513b7ce545" (UID: "b7e90a78-c529-4572-83af-92513b7ce545"). InnerVolumeSpecName "kube-api-access-hm84b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.162031 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm84b\" (UniqueName: \"kubernetes.io/projected/b7e90a78-c529-4572-83af-92513b7ce545-kube-api-access-hm84b\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.162284 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e90a78-c529-4572-83af-92513b7ce545-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.328470 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"]
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.423823 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.494662 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64da-account-create-update-z85gw" event={"ID":"c10693d6-4ced-4a1b-a821-887fac229b90","Type":"ContainerDied","Data":"128df50800f3305cab17696d34c0570b8969dec288fb6ca00dee01ede4eab742"}
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.494715 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="128df50800f3305cab17696d34c0570b8969dec288fb6ca00dee01ede4eab742"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.494781 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64da-account-create-update-z85gw"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.500388 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" event={"ID":"877c5c9a-8740-4a17-b470-ac9c2274c745","Type":"ContainerStarted","Data":"f514761e77c52a75a51b39a23da4c4f263a11940a2fdb09b5290281b1eae921c"}
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.502839 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-98zhc" event={"ID":"b7e90a78-c529-4572-83af-92513b7ce545","Type":"ContainerDied","Data":"9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c"}
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.502861 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9177d4dd349b5f8b92d4c6cf084132eecff1e8f46c53181d1f92d9e3e5914d3c"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.502906 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-98zhc"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.643360 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 28 07:03:17 crc kubenswrapper[4642]: E0128 07:03:17.643932 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10693d6-4ced-4a1b-a821-887fac229b90" containerName="mariadb-account-create-update"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.643949 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10693d6-4ced-4a1b-a821-887fac229b90" containerName="mariadb-account-create-update"
Jan 28 07:03:17 crc kubenswrapper[4642]: E0128 07:03:17.643967 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e90a78-c529-4572-83af-92513b7ce545" containerName="mariadb-database-create"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.643973 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e90a78-c529-4572-83af-92513b7ce545" containerName="mariadb-database-create"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.644101 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e90a78-c529-4572-83af-92513b7ce545" containerName="mariadb-database-create"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.644124 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10693d6-4ced-4a1b-a821-887fac229b90" containerName="mariadb-account-create-update"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.644807 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.647259 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.647830 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.647966 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4kz4b"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.648123 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.657555 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.686484 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.702781 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.704659 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.706683 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.706878 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.707032 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b2dpg"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.714853 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774486 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-cache\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774527 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-config\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774558 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-scripts\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0"
Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774577 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-lock\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774635 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6cx\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-kube-api-access-vg6cx\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774672 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd91859-662e-4131-a376-57998c03d752-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774706 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774727 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf4zd\" (UniqueName: \"kubernetes.io/projected/cc7a3db8-5279-4295-a18a-59749e31d9a4-kube-api-access-pf4zd\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774744 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774760 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774780 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.774825 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.787013 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.875491 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts\") pod \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.875782 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhx7q\" (UniqueName: \"kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q\") pod \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\" (UID: \"4bbb7547-8266-45d7-9198-1f6fd3a4418b\") " Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876044 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876092 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf4zd\" (UniqueName: \"kubernetes.io/projected/cc7a3db8-5279-4295-a18a-59749e31d9a4-kube-api-access-pf4zd\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876119 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876138 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876159 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876215 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876243 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-cache\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876269 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-config\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876301 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-scripts\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876323 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876339 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-lock\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876361 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6cx\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-kube-api-access-vg6cx\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.876404 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd91859-662e-4131-a376-57998c03d752-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.878130 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.878150 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bbb7547-8266-45d7-9198-1f6fd3a4418b" (UID: "4bbb7547-8266-45d7-9198-1f6fd3a4418b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:17 crc kubenswrapper[4642]: E0128 07:03:17.878297 4642 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 07:03:17 crc kubenswrapper[4642]: E0128 07:03:17.878314 4642 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 07:03:17 crc kubenswrapper[4642]: E0128 07:03:17.878360 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift podName:4dd91859-662e-4131-a376-57998c03d752 nodeName:}" failed. No retries permitted until 2026-01-28 07:03:18.37834596 +0000 UTC m=+921.610434770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift") pod "swift-storage-0" (UID: "4dd91859-662e-4131-a376-57998c03d752") : configmap "swift-ring-files" not found Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.878969 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-scripts\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.879175 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.879309 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-lock\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.879706 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4dd91859-662e-4131-a376-57998c03d752-cache\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.880461 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7a3db8-5279-4295-a18a-59749e31d9a4-config\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.881318 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q" (OuterVolumeSpecName: "kube-api-access-fhx7q") pod "4bbb7547-8266-45d7-9198-1f6fd3a4418b" (UID: "4bbb7547-8266-45d7-9198-1f6fd3a4418b"). InnerVolumeSpecName "kube-api-access-fhx7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.882524 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.883280 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.884249 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd91859-662e-4131-a376-57998c03d752-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.889770 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7a3db8-5279-4295-a18a-59749e31d9a4-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.894936 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf4zd\" (UniqueName: \"kubernetes.io/projected/cc7a3db8-5279-4295-a18a-59749e31d9a4-kube-api-access-pf4zd\") pod \"ovn-northd-0\" (UID: \"cc7a3db8-5279-4295-a18a-59749e31d9a4\") " pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.896265 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6cx\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-kube-api-access-vg6cx\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.905445 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.947343 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dlrtb" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.950822 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rg5pj" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.954561 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6eb-account-create-update-4q9gp" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.967723 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.985430 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhx7q\" (UniqueName: \"kubernetes.io/projected/4bbb7547-8266-45d7-9198-1f6fd3a4418b-kube-api-access-fhx7q\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:17 crc kubenswrapper[4642]: I0128 07:03:17.985470 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bbb7547-8266-45d7-9198-1f6fd3a4418b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.086638 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7sp\" (UniqueName: \"kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp\") pod \"c37e9b0e-970a-44db-bc75-0782625ab2a2\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.086960 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts\") pod \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.086982 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts\") pod \"c37e9b0e-970a-44db-bc75-0782625ab2a2\" (UID: \"c37e9b0e-970a-44db-bc75-0782625ab2a2\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087016 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5tl8\" (UniqueName: \"kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8\") pod \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\" (UID: \"9ecdc58f-cfc4-47ac-959c-34336d6c2e36\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087181 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5lz\" (UniqueName: \"kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz\") pod \"95ef778a-9202-43a6-b1b0-fc0938621c71\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087266 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts\") pod \"95ef778a-9202-43a6-b1b0-fc0938621c71\" (UID: \"95ef778a-9202-43a6-b1b0-fc0938621c71\") " Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087500 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ecdc58f-cfc4-47ac-959c-34336d6c2e36" (UID: "9ecdc58f-cfc4-47ac-959c-34336d6c2e36"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087621 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c37e9b0e-970a-44db-bc75-0782625ab2a2" (UID: "c37e9b0e-970a-44db-bc75-0782625ab2a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087749 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95ef778a-9202-43a6-b1b0-fc0938621c71" (UID: "95ef778a-9202-43a6-b1b0-fc0938621c71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087880 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95ef778a-9202-43a6-b1b0-fc0938621c71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087898 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.087908 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c37e9b0e-970a-44db-bc75-0782625ab2a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.091985 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp" (OuterVolumeSpecName: "kube-api-access-ph7sp") pod "c37e9b0e-970a-44db-bc75-0782625ab2a2" (UID: "c37e9b0e-970a-44db-bc75-0782625ab2a2"). InnerVolumeSpecName "kube-api-access-ph7sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.093498 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz" (OuterVolumeSpecName: "kube-api-access-bt5lz") pod "95ef778a-9202-43a6-b1b0-fc0938621c71" (UID: "95ef778a-9202-43a6-b1b0-fc0938621c71"). InnerVolumeSpecName "kube-api-access-bt5lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.094773 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8" (OuterVolumeSpecName: "kube-api-access-m5tl8") pod "9ecdc58f-cfc4-47ac-959c-34336d6c2e36" (UID: "9ecdc58f-cfc4-47ac-959c-34336d6c2e36"). InnerVolumeSpecName "kube-api-access-m5tl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.191848 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7sp\" (UniqueName: \"kubernetes.io/projected/c37e9b0e-970a-44db-bc75-0782625ab2a2-kube-api-access-ph7sp\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.191885 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5tl8\" (UniqueName: \"kubernetes.io/projected/9ecdc58f-cfc4-47ac-959c-34336d6c2e36-kube-api-access-m5tl8\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.191896 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5lz\" (UniqueName: \"kubernetes.io/projected/95ef778a-9202-43a6-b1b0-fc0938621c71-kube-api-access-bt5lz\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206360 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pp9sd"] Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.206679 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecdc58f-cfc4-47ac-959c-34336d6c2e36" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206697 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecdc58f-cfc4-47ac-959c-34336d6c2e36" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.206715 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37e9b0e-970a-44db-bc75-0782625ab2a2" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206722 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37e9b0e-970a-44db-bc75-0782625ab2a2" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.206730 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ef778a-9202-43a6-b1b0-fc0938621c71" containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206736 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ef778a-9202-43a6-b1b0-fc0938621c71" containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.206756 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbb7547-8266-45d7-9198-1f6fd3a4418b" containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206761 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbb7547-8266-45d7-9198-1f6fd3a4418b" containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206881 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37e9b0e-970a-44db-bc75-0782625ab2a2" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206897 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ef778a-9202-43a6-b1b0-fc0938621c71" containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206905 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecdc58f-cfc4-47ac-959c-34336d6c2e36" containerName="mariadb-database-create" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.206917 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbb7547-8266-45d7-9198-1f6fd3a4418b" 
containerName="mariadb-account-create-update" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.207394 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.213397 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.213549 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.213654 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.218911 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pp9sd"] Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293440 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293497 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293527 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293557 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293643 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n598r\" (UniqueName: \"kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293678 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.293767 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.340437 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 07:03:18 crc kubenswrapper[4642]: W0128 07:03:18.345003 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc7a3db8_5279_4295_a18a_59749e31d9a4.slice/crio-9a26ba8969e6cedee50ccbd1df4913ad9a5743f4452bc5ba98972ba66b20b234 WatchSource:0}: Error finding container 9a26ba8969e6cedee50ccbd1df4913ad9a5743f4452bc5ba98972ba66b20b234: Status 404 returned error can't find the container with id 9a26ba8969e6cedee50ccbd1df4913ad9a5743f4452bc5ba98972ba66b20b234 Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395296 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395341 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395385 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395410 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395443 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395464 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.395484 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc 
kubenswrapper[4642]: I0128 07:03:18.395532 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n598r\" (UniqueName: \"kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.396364 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.396775 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.396952 4642 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.396972 4642 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 07:03:18 crc kubenswrapper[4642]: E0128 07:03:18.397004 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift podName:4dd91859-662e-4131-a376-57998c03d752 nodeName:}" failed. No retries permitted until 2026-01-28 07:03:19.396994552 +0000 UTC m=+922.629083361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift") pod "swift-storage-0" (UID: "4dd91859-662e-4131-a376-57998c03d752") : configmap "swift-ring-files" not found Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.397400 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.399896 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.400549 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.400475 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.408534 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n598r\" (UniqueName: \"kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r\") pod \"swift-ring-rebalance-pp9sd\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.512601 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dlrtb" event={"ID":"9ecdc58f-cfc4-47ac-959c-34336d6c2e36","Type":"ContainerDied","Data":"50aaf4bc6addb04866d80e8e47d72685231cfe185cf6c0ad7f233ad28f3222b2"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.512641 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50aaf4bc6addb04866d80e8e47d72685231cfe185cf6c0ad7f233ad28f3222b2" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.512710 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dlrtb" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.517281 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rg5pj" event={"ID":"c37e9b0e-970a-44db-bc75-0782625ab2a2","Type":"ContainerDied","Data":"238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.517320 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238abfa87fe79b631a94a8b37e727a2c395399332ef2678c4f62ad86109ebed4" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.517381 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rg5pj" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.518726 4642 generic.go:334] "Generic (PLEG): container finished" podID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerID="3d436b0e7ef91cb84ab1067d51aea6f235f519c2e6bdcaa5e75d23c2a94365ed" exitCode=0 Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.518749 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" event={"ID":"877c5c9a-8740-4a17-b470-ac9c2274c745","Type":"ContainerDied","Data":"3d436b0e7ef91cb84ab1067d51aea6f235f519c2e6bdcaa5e75d23c2a94365ed"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.520553 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-703f-account-create-update-qw47f" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.520720 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-703f-account-create-update-qw47f" event={"ID":"4bbb7547-8266-45d7-9198-1f6fd3a4418b","Type":"ContainerDied","Data":"a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.521432 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6034de27fbc5ed07e7b82921442a4f907ede664acfe4f05857c339d05a9e83b" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.521661 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cc7a3db8-5279-4295-a18a-59749e31d9a4","Type":"ContainerStarted","Data":"9a26ba8969e6cedee50ccbd1df4913ad9a5743f4452bc5ba98972ba66b20b234"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.524303 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b6eb-account-create-update-4q9gp" event={"ID":"95ef778a-9202-43a6-b1b0-fc0938621c71","Type":"ContainerDied","Data":"e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1"} Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.524330 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a4c140c85622c001f620cec9f4d7d24dfc42e3fac5c89ecdc9b1803864acf1" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.524489 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b6eb-account-create-update-4q9gp" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.528818 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:18 crc kubenswrapper[4642]: I0128 07:03:18.911916 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pp9sd"] Jan 28 07:03:19 crc kubenswrapper[4642]: I0128 07:03:19.412743 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:19 crc kubenswrapper[4642]: E0128 07:03:19.413045 4642 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 07:03:19 crc kubenswrapper[4642]: E0128 07:03:19.413062 4642 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 07:03:19 crc kubenswrapper[4642]: E0128 07:03:19.413125 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift podName:4dd91859-662e-4131-a376-57998c03d752 nodeName:}" failed. No retries permitted until 2026-01-28 07:03:21.413091572 +0000 UTC m=+924.645180381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift") pod "swift-storage-0" (UID: "4dd91859-662e-4131-a376-57998c03d752") : configmap "swift-ring-files" not found Jan 28 07:03:19 crc kubenswrapper[4642]: I0128 07:03:19.532106 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" event={"ID":"877c5c9a-8740-4a17-b470-ac9c2274c745","Type":"ContainerStarted","Data":"e82be3005291e711166acae93067cfc0a8633c8ae36389630fa328b2e351e263"} Jan 28 07:03:19 crc kubenswrapper[4642]: I0128 07:03:19.532213 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:19 crc kubenswrapper[4642]: I0128 07:03:19.533482 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pp9sd" event={"ID":"9c8f1362-c01b-4533-b2fa-a7cbfb573175","Type":"ContainerStarted","Data":"6e8e5de1e444fb1ecc8cd7a3093cfdb097e973cb075873bd4e6e67f5ac3c4e5f"} Jan 28 07:03:19 crc kubenswrapper[4642]: I0128 07:03:19.548035 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" podStartSLOduration=3.548022944 podStartE2EDuration="3.548022944s" podCreationTimestamp="2026-01-28 07:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:19.545787367 +0000 UTC m=+922.777876177" watchObservedRunningTime="2026-01-28 07:03:19.548022944 +0000 UTC m=+922.780111752" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.144555 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kxtxh"] Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.146042 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.147536 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.148667 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7m5n4" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.153850 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kxtxh"] Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.225323 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.225382 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.225454 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.225523 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtr2\" (UniqueName: \"kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.326994 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.327081 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.327173 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.327301 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjtr2\" (UniqueName: \"kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2\") pod 
\"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.331672 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.331866 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.332609 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.340954 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjtr2\" (UniqueName: \"kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2\") pod \"glance-db-sync-kxtxh\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.467877 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kxtxh" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.554309 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cc7a3db8-5279-4295-a18a-59749e31d9a4","Type":"ContainerStarted","Data":"11e271315b4181b4132c73d722ba3cd9382a0f5166cec26bfda792c9000fa78e"} Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.554362 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cc7a3db8-5279-4295-a18a-59749e31d9a4","Type":"ContainerStarted","Data":"761c80f384343ef4f7490c2be4cc6cc900e74b41aec80326803207e423626367"} Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.953289 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.65652396 podStartE2EDuration="3.953274042s" podCreationTimestamp="2026-01-28 07:03:17 +0000 UTC" firstStartedPulling="2026-01-28 07:03:18.347110959 +0000 UTC m=+921.579199757" lastFinishedPulling="2026-01-28 07:03:19.643861029 +0000 UTC m=+922.875949839" observedRunningTime="2026-01-28 07:03:20.578787587 +0000 UTC m=+923.810876396" watchObservedRunningTime="2026-01-28 07:03:20.953274042 +0000 UTC m=+924.185362851" Jan 28 07:03:20 crc kubenswrapper[4642]: I0128 07:03:20.954058 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kxtxh"] Jan 28 07:03:20 crc kubenswrapper[4642]: W0128 07:03:20.958203 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod455075b1_b6a1_4b44_b1a7_fc86ed3fc8e0.slice/crio-ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10 WatchSource:0}: Error finding container ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10: Status 404 
returned error can't find the container with id ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10 Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.459542 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:21 crc kubenswrapper[4642]: E0128 07:03:21.459697 4642 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 07:03:21 crc kubenswrapper[4642]: E0128 07:03:21.459716 4642 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 07:03:21 crc kubenswrapper[4642]: E0128 07:03:21.459760 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift podName:4dd91859-662e-4131-a376-57998c03d752 nodeName:}" failed. No retries permitted until 2026-01-28 07:03:25.459747961 +0000 UTC m=+928.691836770 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift") pod "swift-storage-0" (UID: "4dd91859-662e-4131-a376-57998c03d752") : configmap "swift-ring-files" not found Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.529662 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8kn7s"] Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.530655 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.532138 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.544775 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8kn7s"] Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.564453 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxtxh" event={"ID":"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0","Type":"ContainerStarted","Data":"ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10"} Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.564607 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.662280 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7scr\" (UniqueName: \"kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.662319 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 
07:03:21.763719 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7scr\" (UniqueName: \"kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.763756 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.764512 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.788399 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7scr\" (UniqueName: \"kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr\") pod \"root-account-create-update-8kn7s\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:21 crc kubenswrapper[4642]: I0128 07:03:21.853053 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.212326 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8kn7s"] Jan 28 07:03:23 crc kubenswrapper[4642]: W0128 07:03:23.216101 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616e3599_a69f_46be_aa76_4316b64dc3e1.slice/crio-994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf WatchSource:0}: Error finding container 994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf: Status 404 returned error can't find the container with id 994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.582519 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pp9sd" event={"ID":"9c8f1362-c01b-4533-b2fa-a7cbfb573175","Type":"ContainerStarted","Data":"8c016f2d8b32a580bf6e84dbcc8e6dfece084a583b58310bb64ba7bbe6a63e79"} Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.588019 4642 generic.go:334] "Generic (PLEG): container finished" podID="616e3599-a69f-46be-aa76-4316b64dc3e1" containerID="393b5f0552af00e7059e84fdd61eb1629f86e0494dc5a0e0754cc1b9c95d9942" exitCode=0 Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.588062 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8kn7s" event={"ID":"616e3599-a69f-46be-aa76-4316b64dc3e1","Type":"ContainerDied","Data":"393b5f0552af00e7059e84fdd61eb1629f86e0494dc5a0e0754cc1b9c95d9942"} Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.588106 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8kn7s" 
event={"ID":"616e3599-a69f-46be-aa76-4316b64dc3e1","Type":"ContainerStarted","Data":"994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf"} Jan 28 07:03:23 crc kubenswrapper[4642]: I0128 07:03:23.601361 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-pp9sd" podStartSLOduration=1.662286571 podStartE2EDuration="5.601344585s" podCreationTimestamp="2026-01-28 07:03:18 +0000 UTC" firstStartedPulling="2026-01-28 07:03:18.91718233 +0000 UTC m=+922.149271138" lastFinishedPulling="2026-01-28 07:03:22.856240343 +0000 UTC m=+926.088329152" observedRunningTime="2026-01-28 07:03:23.595118263 +0000 UTC m=+926.827230766" watchObservedRunningTime="2026-01-28 07:03:23.601344585 +0000 UTC m=+926.833433395" Jan 28 07:03:24 crc kubenswrapper[4642]: I0128 07:03:24.422925 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 07:03:24 crc kubenswrapper[4642]: I0128 07:03:24.422968 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 07:03:24 crc kubenswrapper[4642]: I0128 07:03:24.480601 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 07:03:24 crc kubenswrapper[4642]: I0128 07:03:24.663153 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 07:03:24 crc kubenswrapper[4642]: I0128 07:03:24.917683 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.024068 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7scr\" (UniqueName: \"kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr\") pod \"616e3599-a69f-46be-aa76-4316b64dc3e1\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.024238 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts\") pod \"616e3599-a69f-46be-aa76-4316b64dc3e1\" (UID: \"616e3599-a69f-46be-aa76-4316b64dc3e1\") " Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.024783 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "616e3599-a69f-46be-aa76-4316b64dc3e1" (UID: "616e3599-a69f-46be-aa76-4316b64dc3e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.032264 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr" (OuterVolumeSpecName: "kube-api-access-j7scr") pod "616e3599-a69f-46be-aa76-4316b64dc3e1" (UID: "616e3599-a69f-46be-aa76-4316b64dc3e1"). InnerVolumeSpecName "kube-api-access-j7scr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.126616 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e3599-a69f-46be-aa76-4316b64dc3e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.126636 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7scr\" (UniqueName: \"kubernetes.io/projected/616e3599-a69f-46be-aa76-4316b64dc3e1-kube-api-access-j7scr\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.533860 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:25 crc kubenswrapper[4642]: E0128 07:03:25.534051 4642 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 07:03:25 crc kubenswrapper[4642]: E0128 07:03:25.534074 4642 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 07:03:25 crc kubenswrapper[4642]: E0128 07:03:25.534137 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift podName:4dd91859-662e-4131-a376-57998c03d752 nodeName:}" failed. No retries permitted until 2026-01-28 07:03:33.534122238 +0000 UTC m=+936.766211047 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift") pod "swift-storage-0" (UID: "4dd91859-662e-4131-a376-57998c03d752") : configmap "swift-ring-files" not found Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.602896 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8kn7s" event={"ID":"616e3599-a69f-46be-aa76-4316b64dc3e1","Type":"ContainerDied","Data":"994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf"} Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.602936 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994d7b5b429842eb95563802dfe719201aeefc632e3cf820ee5d4c187fbf29cf" Jan 28 07:03:25 crc kubenswrapper[4642]: I0128 07:03:25.602905 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8kn7s" Jan 28 07:03:26 crc kubenswrapper[4642]: I0128 07:03:26.974313 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.021148 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.021974 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="dnsmasq-dns" containerID="cri-o://3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5" gracePeriod=10 Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.452527 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.577798 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config\") pod \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.577901 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb\") pod \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.577927 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc\") pod \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.577954 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9mgk\" (UniqueName: \"kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk\") pod \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.577977 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb\") pod \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\" (UID: \"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0\") " Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.588264 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk" (OuterVolumeSpecName: "kube-api-access-n9mgk") pod "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" (UID: "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0"). InnerVolumeSpecName "kube-api-access-n9mgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.611324 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" (UID: "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.611361 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config" (OuterVolumeSpecName: "config") pod "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" (UID: "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.622555 4642 generic.go:334] "Generic (PLEG): container finished" podID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerID="3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5" exitCode=0 Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.622597 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" event={"ID":"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0","Type":"ContainerDied","Data":"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5"} Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.622622 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" event={"ID":"25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0","Type":"ContainerDied","Data":"a23cd79df42fbf6321d479eea64006c2469c371fbc72dc26a5ebe71f0d6bb2e4"} Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.622639 4642 scope.go:117] "RemoveContainer" containerID="3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.622675 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7685d4bf9-pr5j7" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.623976 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" (UID: "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.624836 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" (UID: "25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.646279 4642 scope.go:117] "RemoveContainer" containerID="b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.663660 4642 scope.go:117] "RemoveContainer" containerID="3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5" Jan 28 07:03:27 crc kubenswrapper[4642]: E0128 07:03:27.663928 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5\": container with ID starting with 3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5 not found: ID does not exist" containerID="3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.663958 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5"} err="failed to get container status \"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5\": rpc error: code = NotFound desc = could not find container \"3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5\": container with ID starting with 3e02c9c379e85e04650bc1f84fb2398264edb0c901b2237e7cb5db9b24559df5 not found: ID does not exist" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.663977 4642 scope.go:117] "RemoveContainer" containerID="b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf" Jan 28 07:03:27 crc kubenswrapper[4642]: E0128 07:03:27.664400 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf\": container with ID starting with b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf not found: ID does not exist" containerID="b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.664423 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf"} err="failed to get container status \"b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf\": rpc error: code = NotFound desc = could not find container \"b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf\": container with ID starting with b14b59f44e215b13b5a7d2a260930af3d60e06590ea1cacd6cbbe26f275b7baf not found: ID does not exist" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.680200 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.680222 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.680233 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.680245 4642 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9mgk\" (UniqueName: \"kubernetes.io/projected/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-kube-api-access-n9mgk\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.680255 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.951138 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:03:27 crc kubenswrapper[4642]: I0128 07:03:27.956397 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7685d4bf9-pr5j7"] Jan 28 07:03:28 crc kubenswrapper[4642]: I0128 07:03:28.630939 4642 generic.go:334] "Generic (PLEG): container finished" podID="9c8f1362-c01b-4533-b2fa-a7cbfb573175" containerID="8c016f2d8b32a580bf6e84dbcc8e6dfece084a583b58310bb64ba7bbe6a63e79" exitCode=0 Jan 28 07:03:28 crc kubenswrapper[4642]: I0128 07:03:28.631256 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pp9sd" event={"ID":"9c8f1362-c01b-4533-b2fa-a7cbfb573175","Type":"ContainerDied","Data":"8c016f2d8b32a580bf6e84dbcc8e6dfece084a583b58310bb64ba7bbe6a63e79"} Jan 28 07:03:29 crc kubenswrapper[4642]: I0128 07:03:29.106858 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" path="/var/lib/kubelet/pods/25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0/volumes" Jan 28 07:03:29 crc kubenswrapper[4642]: I0128 07:03:29.892339 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021597 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021715 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021774 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021799 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021885 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n598r\" (UniqueName: \"kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: 
\"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021918 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.021939 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf\") pod \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\" (UID: \"9c8f1362-c01b-4533-b2fa-a7cbfb573175\") " Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.022591 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.023120 4642 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.023443 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.027112 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r" (OuterVolumeSpecName: "kube-api-access-n598r") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "kube-api-access-n598r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.028818 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.038203 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts" (OuterVolumeSpecName: "scripts") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.039944 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). 
InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.042365 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8f1362-c01b-4533-b2fa-a7cbfb573175" (UID: "9c8f1362-c01b-4533-b2fa-a7cbfb573175"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124840 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n598r\" (UniqueName: \"kubernetes.io/projected/9c8f1362-c01b-4533-b2fa-a7cbfb573175-kube-api-access-n598r\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124868 4642 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124924 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f1362-c01b-4533-b2fa-a7cbfb573175-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124954 4642 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9c8f1362-c01b-4533-b2fa-a7cbfb573175-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124964 4642 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.124972 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8f1362-c01b-4533-b2fa-a7cbfb573175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.654424 4642 generic.go:334] "Generic (PLEG): container finished" podID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerID="083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9" exitCode=0 Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.654486 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerDied","Data":"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9"} Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.656869 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pp9sd" Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.656896 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-pp9sd" event={"ID":"9c8f1362-c01b-4533-b2fa-a7cbfb573175","Type":"ContainerDied","Data":"6e8e5de1e444fb1ecc8cd7a3093cfdb097e973cb075873bd4e6e67f5ac3c4e5f"} Jan 28 07:03:30 crc kubenswrapper[4642]: I0128 07:03:30.656938 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e8e5de1e444fb1ecc8cd7a3093cfdb097e973cb075873bd4e6e67f5ac3c4e5f" Jan 28 07:03:31 crc kubenswrapper[4642]: I0128 07:03:31.665643 4642 generic.go:334] "Generic (PLEG): container finished" podID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerID="73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5" exitCode=0 Jan 28 07:03:31 crc kubenswrapper[4642]: I0128 07:03:31.665723 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerDied","Data":"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5"} Jan 28 07:03:31 crc kubenswrapper[4642]: I0128 07:03:31.669180 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerStarted","Data":"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491"} Jan 28 07:03:31 crc kubenswrapper[4642]: I0128 07:03:31.669382 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:03:31 crc kubenswrapper[4642]: I0128 07:03:31.706709 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.095231219 podStartE2EDuration="1m1.706691332s" podCreationTimestamp="2026-01-28 07:02:30 +0000 UTC" firstStartedPulling="2026-01-28 07:02:32.025634169 +0000 UTC m=+875.257722977" lastFinishedPulling="2026-01-28 07:02:57.637094282 +0000 UTC m=+900.869183090" observedRunningTime="2026-01-28 07:03:31.705008655 +0000 UTC m=+934.937097465" watchObservedRunningTime="2026-01-28 07:03:31.706691332 +0000 UTC m=+934.938780130" Jan 28 07:03:32 crc kubenswrapper[4642]: I0128 07:03:32.680461 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerStarted","Data":"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379"} Jan 28 07:03:32 crc kubenswrapper[4642]: I0128 07:03:32.682745 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 07:03:32 crc kubenswrapper[4642]: I0128 07:03:32.704838 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.2439523 podStartE2EDuration="1m3.704816231s" podCreationTimestamp="2026-01-28 07:02:29 +0000 UTC" firstStartedPulling="2026-01-28 07:02:31.642087006 +0000 UTC m=+874.874175816" lastFinishedPulling="2026-01-28 07:02:58.102950938 +0000 UTC m=+901.335039747" observedRunningTime="2026-01-28 07:03:32.703593762 +0000 UTC m=+935.935682570" watchObservedRunningTime="2026-01-28 07:03:32.704816231 +0000 UTC m=+935.936905040" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.066931 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8kn7s"] Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 
07:03:33.071670 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8kn7s"] Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.105896 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616e3599-a69f-46be-aa76-4316b64dc3e1" path="/var/lib/kubelet/pods/616e3599-a69f-46be-aa76-4316b64dc3e1/volumes" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155037 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6m6vs"] Jan 28 07:03:33 crc kubenswrapper[4642]: E0128 07:03:33.155430 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8f1362-c01b-4533-b2fa-a7cbfb573175" containerName="swift-ring-rebalance" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155460 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8f1362-c01b-4533-b2fa-a7cbfb573175" containerName="swift-ring-rebalance" Jan 28 07:03:33 crc kubenswrapper[4642]: E0128 07:03:33.155473 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="init" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155480 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="init" Jan 28 07:03:33 crc kubenswrapper[4642]: E0128 07:03:33.155497 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="616e3599-a69f-46be-aa76-4316b64dc3e1" containerName="mariadb-account-create-update" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155505 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="616e3599-a69f-46be-aa76-4316b64dc3e1" containerName="mariadb-account-create-update" Jan 28 07:03:33 crc kubenswrapper[4642]: E0128 07:03:33.155531 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="dnsmasq-dns" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155537 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="dnsmasq-dns" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155687 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8f1362-c01b-4533-b2fa-a7cbfb573175" containerName="swift-ring-rebalance" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155708 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="25db2b80-2d19-4f8f-9ea8-0c0c7b3688e0" containerName="dnsmasq-dns" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.155717 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="616e3599-a69f-46be-aa76-4316b64dc3e1" containerName="mariadb-account-create-update" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.156220 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.158157 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.162414 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6m6vs"] Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.176371 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.176569 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ljxn\" (UniqueName: \"kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.278450 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.278626 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ljxn\" (UniqueName: \"kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.279110 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.296482 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ljxn\" (UniqueName: \"kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn\") pod \"root-account-create-update-6m6vs\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.474018 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.582241 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.588153 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4dd91859-662e-4131-a376-57998c03d752-etc-swift\") pod \"swift-storage-0\" (UID: \"4dd91859-662e-4131-a376-57998c03d752\") " pod="openstack/swift-storage-0" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.617929 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 07:03:33 crc kubenswrapper[4642]: I0128 07:03:33.897616 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6m6vs"] Jan 28 07:03:33 crc kubenswrapper[4642]: W0128 07:03:33.899536 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfe7625_9f97_437f_bf74_86d67336b34f.slice/crio-c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27 WatchSource:0}: Error finding container c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27: Status 404 returned error can't find the container with id c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27 Jan 28 07:03:34 crc kubenswrapper[4642]: I0128 07:03:34.139486 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 07:03:34 crc kubenswrapper[4642]: W0128 07:03:34.141615 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dd91859_662e_4131_a376_57998c03d752.slice/crio-9c697c5d9f92ff17da09d798335c6edde41e51c64a67867014b096cb32a802da WatchSource:0}: Error finding container 9c697c5d9f92ff17da09d798335c6edde41e51c64a67867014b096cb32a802da: Status 404 returned error can't find the container with id 9c697c5d9f92ff17da09d798335c6edde41e51c64a67867014b096cb32a802da Jan 28 07:03:34 crc kubenswrapper[4642]: I0128 07:03:34.697535 4642 generic.go:334] "Generic (PLEG): container finished" podID="adfe7625-9f97-437f-bf74-86d67336b34f" containerID="2a9544aee5df4a6addb8b1bb0192adcdc25a5b6392d2e625f467814f2e942c03" exitCode=0 Jan 28 07:03:34 crc kubenswrapper[4642]: I0128 07:03:34.697912 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6m6vs" event={"ID":"adfe7625-9f97-437f-bf74-86d67336b34f","Type":"ContainerDied","Data":"2a9544aee5df4a6addb8b1bb0192adcdc25a5b6392d2e625f467814f2e942c03"} Jan 28 07:03:34 crc kubenswrapper[4642]: I0128 07:03:34.697948 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6m6vs" event={"ID":"adfe7625-9f97-437f-bf74-86d67336b34f","Type":"ContainerStarted","Data":"c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27"} Jan 28 07:03:34 crc kubenswrapper[4642]: I0128 07:03:34.699432 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"9c697c5d9f92ff17da09d798335c6edde41e51c64a67867014b096cb32a802da"} Jan 28 07:03:35 
crc kubenswrapper[4642]: I0128 07:03:35.709832 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"c2049d8b5f70c75d03126fe1afe62ddcd5718b4fc4ba545a247b02c983e5ca32"} Jan 28 07:03:35 crc kubenswrapper[4642]: I0128 07:03:35.950921 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.026550 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts\") pod \"adfe7625-9f97-437f-bf74-86d67336b34f\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.026706 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ljxn\" (UniqueName: \"kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn\") pod \"adfe7625-9f97-437f-bf74-86d67336b34f\" (UID: \"adfe7625-9f97-437f-bf74-86d67336b34f\") " Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.027220 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adfe7625-9f97-437f-bf74-86d67336b34f" (UID: "adfe7625-9f97-437f-bf74-86d67336b34f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.031785 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn" (OuterVolumeSpecName: "kube-api-access-4ljxn") pod "adfe7625-9f97-437f-bf74-86d67336b34f" (UID: "adfe7625-9f97-437f-bf74-86d67336b34f"). InnerVolumeSpecName "kube-api-access-4ljxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.128244 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adfe7625-9f97-437f-bf74-86d67336b34f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.128271 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ljxn\" (UniqueName: \"kubernetes.io/projected/adfe7625-9f97-437f-bf74-86d67336b34f-kube-api-access-4ljxn\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.718281 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6m6vs" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.718541 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6m6vs" event={"ID":"adfe7625-9f97-437f-bf74-86d67336b34f","Type":"ContainerDied","Data":"c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27"} Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.718591 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16c2da50b4b7a1ba3f22d668b320aec65cd37bb7f403868aff259e80f28ae27" Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.720742 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"fdb40f60550a6ffebb9d9878e9d7b2f5deeb9a207a8617855dee6d20abad6f7b"} Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.720787 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"0522ce468bf4a049911525630e45b3f535828441f6f6671fad82506a32ea48d3"} Jan 28 07:03:36 crc kubenswrapper[4642]: I0128 07:03:36.720798 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"02cefa58b39c1bcaa5000594802ca8c956b8a9ec0c477518b848dd63d04b242a"} Jan 28 07:03:37 crc kubenswrapper[4642]: I0128 07:03:37.731010 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"014a8edd8fe9180693c923fc3712e4c8d98bc20cf6b4d35d68022c35dd2a6c8e"} Jan 28 07:03:37 crc kubenswrapper[4642]: I0128 07:03:37.731462 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"9034dffd61ae24796037c13bcb41508eb47bf451ae90cb3b26923dffc81718db"} Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.013986 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.199744 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.199789 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.199824 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.200295 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1"} 
pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.200342 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1" gracePeriod=600 Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.742891 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1" exitCode=0 Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.742956 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1"} Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.743366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3"} Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.743388 4642 scope.go:117] "RemoveContainer" containerID="6c756017776ac34b1c728f1bab1ac90c7064431607aa921734d4ccc64382a3e1" Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.749777 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"19384d764098c1df7e41700ac4bae0fcb69dcabcfed2f7b75a52014274b802ee"} Jan 28 07:03:38 crc kubenswrapper[4642]: I0128 07:03:38.749809 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"7d4293804901bafcb8299c41ed1116e6527764db4cad7b7145b6f51f824d577c"} Jan 28 07:03:39 crc kubenswrapper[4642]: I0128 07:03:39.762988 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"5e3955bc090fd9c231439d14272b4aeb2b9288917d7b804ce01a8b519b9ddc4f"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.578794 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9d4kj" podUID="29b93c34-de22-48ac-80da-b79048401506" containerName="ovn-controller" probeResult="failure" output=< Jan 28 07:03:40 crc kubenswrapper[4642]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 07:03:40 crc kubenswrapper[4642]: > Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.638992 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.640107 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nvphd" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781703 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"392b22a76996a9a87e599351f61fcdb6939038f36b8f93fce86344d37a1d611f"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781768 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"6b0298d39bf334e67cf850a466d8dba6982abebac2682dbcf2746f0d481bfb59"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781781 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"09771e5821626d227cd9a2e5962105a35558476f24e74833f31f2d50bd6af8ab"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781792 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"15a1981ab0dbcee4317b7d12b3b1d8b7b4429ecb0cd7a4f106e203636893a569"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781801 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"f02b366395a788238b629471e573a41b610a5ac1feeccf9d74916ac0be348c49"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.781809 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4dd91859-662e-4131-a376-57998c03d752","Type":"ContainerStarted","Data":"6bc98353a5592a24f02db9345b0e5c7ab88d65530e0a24402eb21800099b6993"} Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.811268 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.583818416 podStartE2EDuration="24.811251624s" podCreationTimestamp="2026-01-28 07:03:16 +0000 UTC" firstStartedPulling="2026-01-28 07:03:34.144177892 +0000 UTC m=+937.376266701" lastFinishedPulling="2026-01-28 07:03:39.3716111 +0000 UTC m=+942.603699909" observedRunningTime="2026-01-28 07:03:40.809720283 +0000 UTC m=+944.041809112" watchObservedRunningTime="2026-01-28 07:03:40.811251624 +0000 UTC m=+944.043340434" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.848932 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9d4kj-config-4xkcw"] Jan 28 07:03:40 crc kubenswrapper[4642]: E0128 07:03:40.849500 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfe7625-9f97-437f-bf74-86d67336b34f" containerName="mariadb-account-create-update" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.849526 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfe7625-9f97-437f-bf74-86d67336b34f" containerName="mariadb-account-create-update" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.849787 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfe7625-9f97-437f-bf74-86d67336b34f" containerName="mariadb-account-create-update" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.850567 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.852714 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.861750 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9d4kj-config-4xkcw"] Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.903721 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47xrf\" (UniqueName: \"kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.903794 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.903944 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.904018 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.904229 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:40 crc kubenswrapper[4642]: I0128 07:03:40.904428 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007790 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007879 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007903 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47xrf\" (UniqueName: \"kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007925 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007966 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.007992 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.008330 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.008355 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.008417 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.008693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.009795 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.028307 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47xrf\" (UniqueName: \"kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf\") pod \"ovn-controller-9d4kj-config-4xkcw\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.049975 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.051173 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.053891 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.057721 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.165672 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.211080 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.211403 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.211500 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.211598 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.211641 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5pb\" (UniqueName: \"kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: 
I0128 07:03:41.211662 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.216099 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.317631 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.317688 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.317762 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.317911 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.317992 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5pb\" (UniqueName: \"kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.318024 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.318682 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.319001 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: 
\"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.320261 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.320660 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.324014 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.343337 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5pb\" (UniqueName: \"kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb\") pod \"dnsmasq-dns-849d6cccf9-nf7n7\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.379558 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.456765 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-97hmf"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.462636 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.472039 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-97hmf"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.488396 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.568811 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jl29g"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.570081 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.573278 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3a33-account-create-update-xs5t5"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.574339 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.577380 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.594276 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3a33-account-create-update-xs5t5"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.602523 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jl29g"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.621915 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz6j\" (UniqueName: \"kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j\") pod \"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.622040 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts\") pod \"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.629154 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9d4kj-config-4xkcw"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.692760 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-93c0-account-create-update-jwrtw"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.693820 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.696741 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.702734 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-93c0-account-create-update-jwrtw"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.722848 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts\") pod \"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.722915 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts\") pod \"barbican-db-create-jl29g\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.722941 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.723000 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvsl\" (UniqueName: \"kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.723023 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsfj\" (UniqueName: \"kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj\") pod \"barbican-db-create-jl29g\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.723040 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz6j\" (UniqueName: \"kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j\") pod \"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.723589 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts\") pod \"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.743479 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz6j\" (UniqueName: \"kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j\") pod 
\"cinder-db-create-97hmf\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.785716 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vn7qt"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.787205 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.787429 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.804616 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9d4kj-config-4xkcw" event={"ID":"8bdc7d12-45ca-4119-ac92-cf16305dc903","Type":"ContainerStarted","Data":"a3765fa13ba9b6f093cf0b117354da11abf0286857ab779f413e290d14c9d799"} Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.806300 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vn7qt"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825552 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts\") pod \"barbican-db-create-jl29g\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825591 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825643 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts\") pod \"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825680 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8lr\" (UniqueName: \"kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr\") pod \"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnvsl\" (UniqueName: \"kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.825775 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsfj\" (UniqueName: \"kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj\") pod \"barbican-db-create-jl29g\" (UID: 
\"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.827211 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts\") pod \"barbican-db-create-jl29g\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.827702 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.835994 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gh7r2"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.837338 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.844885 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.845082 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.845264 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lm99t" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.854399 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.876677 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gh7r2"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.883744 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsfj\" (UniqueName: \"kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj\") pod \"barbican-db-create-jl29g\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") " pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.891239 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jl29g" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.895483 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnvsl\" (UniqueName: \"kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl\") pod \"cinder-3a33-account-create-update-xs5t5\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") " pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.904514 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3a33-account-create-update-xs5t5" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.914606 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5126-account-create-update-gt9kz"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.915969 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.919073 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.919103 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5126-account-create-update-gt9kz"] Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.929224 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.929293 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq8j\" (UniqueName: \"kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.929311 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.929352 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.930010 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts\") pod \"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.930040 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsjp\" (UniqueName: \"kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.930074 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8lr\" (UniqueName: \"kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr\") pod \"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.931765 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts\") pod 
\"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.946387 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8lr\" (UniqueName: \"kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr\") pod \"barbican-93c0-account-create-update-jwrtw\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") " pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:41 crc kubenswrapper[4642]: I0128 07:03:41.963597 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:03:41 crc kubenswrapper[4642]: W0128 07:03:41.981591 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40a1d5e_2d6f_4f93_9a5f_6fbc808f1a2e.slice/crio-fb3d5c998962cb58d11a913c432d3d7aab955189488ad05c2e8cbf2e18cea5e3 WatchSource:0}: Error finding container fb3d5c998962cb58d11a913c432d3d7aab955189488ad05c2e8cbf2e18cea5e3: Status 404 returned error can't find the container with id fb3d5c998962cb58d11a913c432d3d7aab955189488ad05c2e8cbf2e18cea5e3 Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.022261 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93c0-account-create-update-jwrtw" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033665 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfctv\" (UniqueName: \"kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033746 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033777 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq8j\" (UniqueName: \"kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033795 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033824 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.033929 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.034038 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncsjp\" (UniqueName: \"kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.040106 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.040164 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.042738 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.053632 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncsjp\" (UniqueName: \"kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp\") pod \"neutron-db-create-vn7qt\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:42 crc kubenswrapper[4642]: I0128 07:03:42.059003 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq8j\" (UniqueName: \"kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j\") pod \"keystone-db-sync-gh7r2\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") " pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.126290 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.135962 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.136021 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfctv\" (UniqueName: \"kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.136724 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.149874 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfctv\" (UniqueName: \"kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv\") pod \"neutron-5126-account-create-update-gt9kz\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") " pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.185985 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gh7r2" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.236302 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5126-account-create-update-gt9kz" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.298453 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-97hmf"] Jan 28 07:03:43 crc kubenswrapper[4642]: W0128 07:03:42.307784 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096310a2_8b6e_436b_9b34_6263ac3806b6.slice/crio-d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325 WatchSource:0}: Error finding container d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325: Status 404 returned error can't find the container with id d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.811277 4642 generic.go:334] "Generic (PLEG): container finished" podID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerID="d0678ef6e24f0be02ecc6fbfa4d0e423ab40d0e3177d47fdc5d416a94b3962dd" exitCode=0 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.811470 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" event={"ID":"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e","Type":"ContainerDied","Data":"d0678ef6e24f0be02ecc6fbfa4d0e423ab40d0e3177d47fdc5d416a94b3962dd"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.811608 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" event={"ID":"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e","Type":"ContainerStarted","Data":"fb3d5c998962cb58d11a913c432d3d7aab955189488ad05c2e8cbf2e18cea5e3"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.814118 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9d4kj-config-4xkcw" event={"ID":"8bdc7d12-45ca-4119-ac92-cf16305dc903","Type":"ContainerStarted","Data":"4e2eb66f9a1e2cc53963e15f342b68a48faa3a94507bd022f623cf3e62a0a5ab"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.815093 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97hmf" event={"ID":"096310a2-8b6e-436b-9b34-6263ac3806b6","Type":"ContainerStarted","Data":"7ae425e8f94a4f63672e4cfa8a6c900a3dadacfa6cfee846ce69ea39fe6d4a77"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.815112 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97hmf" event={"ID":"096310a2-8b6e-436b-9b34-6263ac3806b6","Type":"ContainerStarted","Data":"d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.881568 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9d4kj-config-4xkcw" podStartSLOduration=2.881548586 podStartE2EDuration="2.881548586s" podCreationTimestamp="2026-01-28 07:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:42.870985044 +0000 UTC m=+946.103073853" watchObservedRunningTime="2026-01-28 07:03:42.881548586 +0000 UTC m=+946.113637395" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:42.883814 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-97hmf" podStartSLOduration=1.883808618 podStartE2EDuration="1.883808618s" podCreationTimestamp="2026-01-28 07:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:42.849827792 +0000 UTC m=+946.081916601" watchObservedRunningTime="2026-01-28 07:03:42.883808618 +0000 UTC m=+946.115897427" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.546505 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jl29g"] Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.562465 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3a33-account-create-update-xs5t5"] Jan 28 07:03:43 crc kubenswrapper[4642]: W0128 07:03:43.573543 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod806827f7_19db_4656_89ed_9d2253ecbf67.slice/crio-5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8 WatchSource:0}: Error finding container 5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8: Status 404 returned error can't find the container with id 5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.682536 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gh7r2"] Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.693071 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-93c0-account-create-update-jwrtw"] Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.695847 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5126-account-create-update-gt9kz"] Jan 28 07:03:43 crc kubenswrapper[4642]: W0128 07:03:43.706902 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod910e369b_6c77_43c1_95de_f125a1813bc6.slice/crio-6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9 WatchSource:0}: Error finding container 6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9: Status 404 returned error can't find the container with id 6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.711134 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vn7qt"] Jan 28 07:03:43 crc kubenswrapper[4642]: W0128 07:03:43.737048 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc47afaa6_b39b_4af7_aaee_d47fa6e7932a.slice/crio-62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f WatchSource:0}: Error finding container 62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f: Status 404 returned error can't find the container with id 62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.868295 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-93c0-account-create-update-jwrtw" event={"ID":"3987de36-92f6-4d2b-b4d6-42ab67c7525f","Type":"ContainerStarted","Data":"9ea05f17d38715cc2687c02b02261cd9357d4f7ea873f5a6c41648d03a77be3e"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.876751 4642 generic.go:334] "Generic (PLEG): container finished" podID="8bdc7d12-45ca-4119-ac92-cf16305dc903" containerID="4e2eb66f9a1e2cc53963e15f342b68a48faa3a94507bd022f623cf3e62a0a5ab" exitCode=0 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.876968 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-9d4kj-config-4xkcw" event={"ID":"8bdc7d12-45ca-4119-ac92-cf16305dc903","Type":"ContainerDied","Data":"4e2eb66f9a1e2cc53963e15f342b68a48faa3a94507bd022f623cf3e62a0a5ab"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.884740 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jl29g" event={"ID":"806827f7-19db-4656-89ed-9d2253ecbf67","Type":"ContainerStarted","Data":"5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.895458 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" event={"ID":"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e","Type":"ContainerStarted","Data":"716156c7890e28ba48a0c68ce3ac2b1368c24ced520d78fdbc588d840e3ff3f5"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.895650 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.898672 4642 generic.go:334] "Generic (PLEG): container finished" podID="096310a2-8b6e-436b-9b34-6263ac3806b6" containerID="7ae425e8f94a4f63672e4cfa8a6c900a3dadacfa6cfee846ce69ea39fe6d4a77" exitCode=0 Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.898725 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97hmf" event={"ID":"096310a2-8b6e-436b-9b34-6263ac3806b6","Type":"ContainerDied","Data":"7ae425e8f94a4f63672e4cfa8a6c900a3dadacfa6cfee846ce69ea39fe6d4a77"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.899949 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3a33-account-create-update-xs5t5" event={"ID":"61bc8d83-3e11-4a38-9df5-f1c3f391540f","Type":"ContainerStarted","Data":"f907ea8cf24a5f1ea3912c7d9966d0b3d343f3935f0c5d0c5fb6af0d65ae288e"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.901066 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gh7r2" event={"ID":"995eb4f3-f6c8-4fb3-a991-8777f7b645cb","Type":"ContainerStarted","Data":"2bdcdce2786dea08cef7826a9602b367509a99b9adb1b5adbd0186daa5cd9f0b"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.903331 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5126-account-create-update-gt9kz" event={"ID":"910e369b-6c77-43c1-95de-f125a1813bc6","Type":"ContainerStarted","Data":"6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.904375 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vn7qt" event={"ID":"c47afaa6-b39b-4af7-aaee-d47fa6e7932a","Type":"ContainerStarted","Data":"62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f"} Jan 28 07:03:43 crc kubenswrapper[4642]: I0128 07:03:43.914228 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" podStartSLOduration=2.914217282 podStartE2EDuration="2.914217282s" podCreationTimestamp="2026-01-28 07:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:03:43.908697318 +0000 UTC m=+947.140786127" watchObservedRunningTime="2026-01-28 07:03:43.914217282 +0000 UTC m=+947.146306092" Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.916535 4642 generic.go:334] "Generic (PLEG): container finished" 
podID="61bc8d83-3e11-4a38-9df5-f1c3f391540f" containerID="7d77c7ac7f5ff552436038a3f0848ec7bfc69bf52dcac07c423e4b3060cf797f" exitCode=0 Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.916581 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3a33-account-create-update-xs5t5" event={"ID":"61bc8d83-3e11-4a38-9df5-f1c3f391540f","Type":"ContainerDied","Data":"7d77c7ac7f5ff552436038a3f0848ec7bfc69bf52dcac07c423e4b3060cf797f"} Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.918718 4642 generic.go:334] "Generic (PLEG): container finished" podID="806827f7-19db-4656-89ed-9d2253ecbf67" containerID="f734b7bb98c92bd5a4bb422c0d89a59844110a34b4c30e57d5142dfcfda108fa" exitCode=0 Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.918742 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jl29g" event={"ID":"806827f7-19db-4656-89ed-9d2253ecbf67","Type":"ContainerDied","Data":"f734b7bb98c92bd5a4bb422c0d89a59844110a34b4c30e57d5142dfcfda108fa"} Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.920656 4642 generic.go:334] "Generic (PLEG): container finished" podID="910e369b-6c77-43c1-95de-f125a1813bc6" containerID="c0580a3436a26f97b7dfdeb5c1cc58a5444133baa07d8d45a8dbec558df6bf0a" exitCode=0 Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.920709 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5126-account-create-update-gt9kz" event={"ID":"910e369b-6c77-43c1-95de-f125a1813bc6","Type":"ContainerDied","Data":"c0580a3436a26f97b7dfdeb5c1cc58a5444133baa07d8d45a8dbec558df6bf0a"} Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.922928 4642 generic.go:334] "Generic (PLEG): container finished" podID="c47afaa6-b39b-4af7-aaee-d47fa6e7932a" containerID="3350b7651fdc3065210fdccdcf5f7c3b46f91c5ead66442b9642524a0c5bcda2" exitCode=0 Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.922949 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vn7qt" event={"ID":"c47afaa6-b39b-4af7-aaee-d47fa6e7932a","Type":"ContainerDied","Data":"3350b7651fdc3065210fdccdcf5f7c3b46f91c5ead66442b9642524a0c5bcda2"} Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.930122 4642 generic.go:334] "Generic (PLEG): container finished" podID="3987de36-92f6-4d2b-b4d6-42ab67c7525f" containerID="a6c5166a99a3d5e78a7716afdd8aad77f381dfbc0490021d064ed6b5ee6f781d" exitCode=0 Jan 28 07:03:44 crc kubenswrapper[4642]: I0128 07:03:44.931071 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-93c0-account-create-update-jwrtw" event={"ID":"3987de36-92f6-4d2b-b4d6-42ab67c7525f","Type":"ContainerDied","Data":"a6c5166a99a3d5e78a7716afdd8aad77f381dfbc0490021d064ed6b5ee6f781d"} Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.285353 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.350892 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432059 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432121 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47xrf\" (UniqueName: \"kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432202 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432235 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432285 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432320 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts\") pod \"8bdc7d12-45ca-4119-ac92-cf16305dc903\" (UID: \"8bdc7d12-45ca-4119-ac92-cf16305dc903\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432479 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.432513 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.433029 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.433047 4642 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.433062 4642 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.433063 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run" (OuterVolumeSpecName: "var-run") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.433259 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts" (OuterVolumeSpecName: "scripts") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.440327 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf" (OuterVolumeSpecName: "kube-api-access-47xrf") pod "8bdc7d12-45ca-4119-ac92-cf16305dc903" (UID: "8bdc7d12-45ca-4119-ac92-cf16305dc903"). InnerVolumeSpecName "kube-api-access-47xrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.533947 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts\") pod \"096310a2-8b6e-436b-9b34-6263ac3806b6\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534039 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz6j\" (UniqueName: \"kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j\") pod \"096310a2-8b6e-436b-9b34-6263ac3806b6\" (UID: \"096310a2-8b6e-436b-9b34-6263ac3806b6\") " Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534618 4642 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bdc7d12-45ca-4119-ac92-cf16305dc903-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534639 4642 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534649 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc7d12-45ca-4119-ac92-cf16305dc903-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534660 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47xrf\" (UniqueName: \"kubernetes.io/projected/8bdc7d12-45ca-4119-ac92-cf16305dc903-kube-api-access-47xrf\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.534834 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "096310a2-8b6e-436b-9b34-6263ac3806b6" (UID: "096310a2-8b6e-436b-9b34-6263ac3806b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.537282 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j" (OuterVolumeSpecName: "kube-api-access-4qz6j") pod "096310a2-8b6e-436b-9b34-6263ac3806b6" (UID: "096310a2-8b6e-436b-9b34-6263ac3806b6"). InnerVolumeSpecName "kube-api-access-4qz6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.575676 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9d4kj" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.637586 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qz6j\" (UniqueName: \"kubernetes.io/projected/096310a2-8b6e-436b-9b34-6263ac3806b6-kube-api-access-4qz6j\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.637637 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/096310a2-8b6e-436b-9b34-6263ac3806b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.949644 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9d4kj-config-4xkcw"] Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.952421 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9d4kj-config-4xkcw" event={"ID":"8bdc7d12-45ca-4119-ac92-cf16305dc903","Type":"ContainerDied","Data":"a3765fa13ba9b6f093cf0b117354da11abf0286857ab779f413e290d14c9d799"} Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.952465 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3765fa13ba9b6f093cf0b117354da11abf0286857ab779f413e290d14c9d799" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.952536 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9d4kj-config-4xkcw" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.956124 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9d4kj-config-4xkcw"] Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.963614 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-97hmf" Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.964135 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97hmf" event={"ID":"096310a2-8b6e-436b-9b34-6263ac3806b6","Type":"ContainerDied","Data":"d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325"} Jan 28 07:03:45 crc kubenswrapper[4642]: I0128 07:03:45.964225 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4bcf1e30dbc182734ebffca13545588e510350718676ba8d494ee08aee5e325" Jan 28 07:03:47 crc kubenswrapper[4642]: I0128 07:03:47.107076 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdc7d12-45ca-4119-ac92-cf16305dc903" path="/var/lib/kubelet/pods/8bdc7d12-45ca-4119-ac92-cf16305dc903/volumes" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.134939 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vn7qt" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.145852 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncsjp\" (UniqueName: \"kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp\") pod \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.155287 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp" (OuterVolumeSpecName: "kube-api-access-ncsjp") pod "c47afaa6-b39b-4af7-aaee-d47fa6e7932a" (UID: "c47afaa6-b39b-4af7-aaee-d47fa6e7932a"). InnerVolumeSpecName "kube-api-access-ncsjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.247145 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts\") pod \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\" (UID: \"c47afaa6-b39b-4af7-aaee-d47fa6e7932a\") " Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.247610 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncsjp\" (UniqueName: \"kubernetes.io/projected/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-kube-api-access-ncsjp\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.247988 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c47afaa6-b39b-4af7-aaee-d47fa6e7932a" (UID: "c47afaa6-b39b-4af7-aaee-d47fa6e7932a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.348686 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c47afaa6-b39b-4af7-aaee-d47fa6e7932a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.382381 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.439831 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"] Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.440246 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="dnsmasq-dns" containerID="cri-o://e82be3005291e711166acae93067cfc0a8633c8ae36389630fa328b2e351e263" gracePeriod=10 Jan 28 07:03:51 crc kubenswrapper[4642]: I0128 07:03:51.973421 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 28 07:03:52 crc kubenswrapper[4642]: I0128 07:03:52.030687 4642 generic.go:334] "Generic (PLEG): container finished" podID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerID="e82be3005291e711166acae93067cfc0a8633c8ae36389630fa328b2e351e263" exitCode=0 Jan 28 07:03:52 crc kubenswrapper[4642]: I0128 07:03:52.030765 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" event={"ID":"877c5c9a-8740-4a17-b470-ac9c2274c745","Type":"ContainerDied","Data":"e82be3005291e711166acae93067cfc0a8633c8ae36389630fa328b2e351e263"} Jan 28 07:03:52 crc kubenswrapper[4642]: I0128 07:03:52.032264 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vn7qt" event={"ID":"c47afaa6-b39b-4af7-aaee-d47fa6e7932a","Type":"ContainerDied","Data":"62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f"} Jan 28 07:03:52 crc kubenswrapper[4642]: I0128 07:03:52.032296 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ae3bb55ef3106e3f7f88cb7fd21ea3fa854e998d2dfffafc38a954f38dfb7f" Jan 28 07:03:52 crc kubenswrapper[4642]: I0128 07:03:52.032342 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vn7qt"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.051861 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-93c0-account-create-update-jwrtw" event={"ID":"3987de36-92f6-4d2b-b4d6-42ab67c7525f","Type":"ContainerDied","Data":"9ea05f17d38715cc2687c02b02261cd9357d4f7ea873f5a6c41648d03a77be3e"}
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.052146 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea05f17d38715cc2687c02b02261cd9357d4f7ea873f5a6c41648d03a77be3e"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.055163 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3a33-account-create-update-xs5t5" event={"ID":"61bc8d83-3e11-4a38-9df5-f1c3f391540f","Type":"ContainerDied","Data":"f907ea8cf24a5f1ea3912c7d9966d0b3d343f3935f0c5d0c5fb6af0d65ae288e"}
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.055225 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f907ea8cf24a5f1ea3912c7d9966d0b3d343f3935f0c5d0c5fb6af0d65ae288e"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.056272 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jl29g" event={"ID":"806827f7-19db-4656-89ed-9d2253ecbf67","Type":"ContainerDied","Data":"5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8"}
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.056286 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5227a8e617b156e8bab8f88d007462c64a23ec57ff43a55f6fd7ff1f1ce148a8"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.057144 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5126-account-create-update-gt9kz" event={"ID":"910e369b-6c77-43c1-95de-f125a1813bc6","Type":"ContainerDied","Data":"6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9"}
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.057158 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0ca7a98c3d48c33284535d4cb6ac2ac7effe00622229354dcb82e33deda0c9"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.098060 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3a33-account-create-update-xs5t5"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.104725 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5126-account-create-update-gt9kz"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.114428 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93c0-account-create-update-jwrtw"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.138509 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jl29g"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.180359 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx"
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.204281 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnvsl\" (UniqueName: \"kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl\") pod \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.204417 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts\") pod \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\" (UID: \"61bc8d83-3e11-4a38-9df5-f1c3f391540f\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.205651 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61bc8d83-3e11-4a38-9df5-f1c3f391540f" (UID: "61bc8d83-3e11-4a38-9df5-f1c3f391540f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.212629 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl" (OuterVolumeSpecName: "kube-api-access-tnvsl") pod "61bc8d83-3e11-4a38-9df5-f1c3f391540f" (UID: "61bc8d83-3e11-4a38-9df5-f1c3f391540f"). InnerVolumeSpecName "kube-api-access-tnvsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.306560 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvnhn\" (UniqueName: \"kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn\") pod \"877c5c9a-8740-4a17-b470-ac9c2274c745\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.306920 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts\") pod \"806827f7-19db-4656-89ed-9d2253ecbf67\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.306991 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c8lr\" (UniqueName: \"kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr\") pod \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307048 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts\") pod \"910e369b-6c77-43c1-95de-f125a1813bc6\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307123 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbsfj\" (UniqueName: \"kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj\") pod \"806827f7-19db-4656-89ed-9d2253ecbf67\" (UID: \"806827f7-19db-4656-89ed-9d2253ecbf67\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307158 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfctv\" (UniqueName: \"kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv\") pod \"910e369b-6c77-43c1-95de-f125a1813bc6\" (UID: \"910e369b-6c77-43c1-95de-f125a1813bc6\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307224 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb\") pod \"877c5c9a-8740-4a17-b470-ac9c2274c745\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307244 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc\") pod \"877c5c9a-8740-4a17-b470-ac9c2274c745\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307271 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb\") pod \"877c5c9a-8740-4a17-b470-ac9c2274c745\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307286 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts\") pod \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\" (UID: \"3987de36-92f6-4d2b-b4d6-42ab67c7525f\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307322 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config\") pod \"877c5c9a-8740-4a17-b470-ac9c2274c745\" (UID: \"877c5c9a-8740-4a17-b470-ac9c2274c745\") "
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307840 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "910e369b-6c77-43c1-95de-f125a1813bc6" (UID: "910e369b-6c77-43c1-95de-f125a1813bc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.307894 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3987de36-92f6-4d2b-b4d6-42ab67c7525f" (UID: "3987de36-92f6-4d2b-b4d6-42ab67c7525f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.308078 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnvsl\" (UniqueName: \"kubernetes.io/projected/61bc8d83-3e11-4a38-9df5-f1c3f391540f-kube-api-access-tnvsl\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.308096 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910e369b-6c77-43c1-95de-f125a1813bc6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.308105 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bc8d83-3e11-4a38-9df5-f1c3f391540f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.308113 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3987de36-92f6-4d2b-b4d6-42ab67c7525f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.308440 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "806827f7-19db-4656-89ed-9d2253ecbf67" (UID: "806827f7-19db-4656-89ed-9d2253ecbf67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.310070 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn" (OuterVolumeSpecName: "kube-api-access-mvnhn") pod "877c5c9a-8740-4a17-b470-ac9c2274c745" (UID: "877c5c9a-8740-4a17-b470-ac9c2274c745"). InnerVolumeSpecName "kube-api-access-mvnhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.311040 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv" (OuterVolumeSpecName: "kube-api-access-pfctv") pod "910e369b-6c77-43c1-95de-f125a1813bc6" (UID: "910e369b-6c77-43c1-95de-f125a1813bc6"). InnerVolumeSpecName "kube-api-access-pfctv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.311400 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr" (OuterVolumeSpecName: "kube-api-access-9c8lr") pod "3987de36-92f6-4d2b-b4d6-42ab67c7525f" (UID: "3987de36-92f6-4d2b-b4d6-42ab67c7525f"). InnerVolumeSpecName "kube-api-access-9c8lr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.316251 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj" (OuterVolumeSpecName: "kube-api-access-cbsfj") pod "806827f7-19db-4656-89ed-9d2253ecbf67" (UID: "806827f7-19db-4656-89ed-9d2253ecbf67"). InnerVolumeSpecName "kube-api-access-cbsfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.337489 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "877c5c9a-8740-4a17-b470-ac9c2274c745" (UID: "877c5c9a-8740-4a17-b470-ac9c2274c745"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.340887 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "877c5c9a-8740-4a17-b470-ac9c2274c745" (UID: "877c5c9a-8740-4a17-b470-ac9c2274c745"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.343001 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "877c5c9a-8740-4a17-b470-ac9c2274c745" (UID: "877c5c9a-8740-4a17-b470-ac9c2274c745"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.344990 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config" (OuterVolumeSpecName: "config") pod "877c5c9a-8740-4a17-b470-ac9c2274c745" (UID: "877c5c9a-8740-4a17-b470-ac9c2274c745"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410296 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbsfj\" (UniqueName: \"kubernetes.io/projected/806827f7-19db-4656-89ed-9d2253ecbf67-kube-api-access-cbsfj\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410634 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfctv\" (UniqueName: \"kubernetes.io/projected/910e369b-6c77-43c1-95de-f125a1813bc6-kube-api-access-pfctv\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410717 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410784 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410834 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410890 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877c5c9a-8740-4a17-b470-ac9c2274c745-config\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410946 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvnhn\" (UniqueName: \"kubernetes.io/projected/877c5c9a-8740-4a17-b470-ac9c2274c745-kube-api-access-mvnhn\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.410997 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/806827f7-19db-4656-89ed-9d2253ecbf67-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:54 crc kubenswrapper[4642]: I0128 07:03:54.411050 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c8lr\" (UniqueName: \"kubernetes.io/projected/3987de36-92f6-4d2b-b4d6-42ab67c7525f-kube-api-access-9c8lr\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.066972 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxtxh" event={"ID":"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0","Type":"ContainerStarted","Data":"ee5d93f01fa5bb07bc234a89d646ecd0e5b52c4158e76237be7999a9b57da439"}
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.069130 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gh7r2" event={"ID":"995eb4f3-f6c8-4fb3-a991-8777f7b645cb","Type":"ContainerStarted","Data":"f0b14c61ae9adc600cff3c4c95c1e7be6419108332a6b145c55c936a93d4ea86"}
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.071944 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jl29g"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.074260 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx" event={"ID":"877c5c9a-8740-4a17-b470-ac9c2274c745","Type":"ContainerDied","Data":"f514761e77c52a75a51b39a23da4c4f263a11940a2fdb09b5290281b1eae921c"}
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.074305 4642 scope.go:117] "RemoveContainer" containerID="e82be3005291e711166acae93067cfc0a8633c8ae36389630fa328b2e351e263"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.074445 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c56fc69cc-rtglx"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.074492 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3a33-account-create-update-xs5t5"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.076610 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5126-account-create-update-gt9kz"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.076659 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-93c0-account-create-update-jwrtw"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.091358 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kxtxh" podStartSLOduration=2.060534436 podStartE2EDuration="35.09134473s" podCreationTimestamp="2026-01-28 07:03:20 +0000 UTC" firstStartedPulling="2026-01-28 07:03:20.959901901 +0000 UTC m=+924.191990711" lastFinishedPulling="2026-01-28 07:03:53.990712196 +0000 UTC m=+957.222801005" observedRunningTime="2026-01-28 07:03:55.086208576 +0000 UTC m=+958.318297385" watchObservedRunningTime="2026-01-28 07:03:55.09134473 +0000 UTC m=+958.323433538"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.101126 4642 scope.go:117] "RemoveContainer" containerID="3d436b0e7ef91cb84ab1067d51aea6f235f519c2e6bdcaa5e75d23c2a94365ed"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.116586 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gh7r2" podStartSLOduration=3.868010112 podStartE2EDuration="14.116564754s" podCreationTimestamp="2026-01-28 07:03:41 +0000 UTC" firstStartedPulling="2026-01-28 07:03:43.710294842 +0000 UTC m=+946.942383650" lastFinishedPulling="2026-01-28 07:03:53.958849483 +0000 UTC m=+957.190938292" observedRunningTime="2026-01-28 07:03:55.104223367 +0000 UTC m=+958.336312186" watchObservedRunningTime="2026-01-28 07:03:55.116564754 +0000 UTC m=+958.348653563"
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.160779 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"]
Jan 28 07:03:55 crc kubenswrapper[4642]: I0128 07:03:55.180965 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c56fc69cc-rtglx"]
Jan 28 07:03:56 crc kubenswrapper[4642]: I0128 07:03:56.083988 4642 generic.go:334] "Generic (PLEG): container finished" podID="995eb4f3-f6c8-4fb3-a991-8777f7b645cb" containerID="f0b14c61ae9adc600cff3c4c95c1e7be6419108332a6b145c55c936a93d4ea86" exitCode=0
Jan 28 07:03:56 crc kubenswrapper[4642]: I0128 07:03:56.084034 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gh7r2" event={"ID":"995eb4f3-f6c8-4fb3-a991-8777f7b645cb","Type":"ContainerDied","Data":"f0b14c61ae9adc600cff3c4c95c1e7be6419108332a6b145c55c936a93d4ea86"}
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.111657 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" path="/var/lib/kubelet/pods/877c5c9a-8740-4a17-b470-ac9c2274c745/volumes"
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.389640 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gh7r2"
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.558933 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data\") pod \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") "
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.559109 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle\") pod \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") "
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.559180 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwq8j\" (UniqueName: \"kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j\") pod \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\" (UID: \"995eb4f3-f6c8-4fb3-a991-8777f7b645cb\") "
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.564962 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j" (OuterVolumeSpecName: "kube-api-access-bwq8j") pod "995eb4f3-f6c8-4fb3-a991-8777f7b645cb" (UID: "995eb4f3-f6c8-4fb3-a991-8777f7b645cb"). InnerVolumeSpecName "kube-api-access-bwq8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.579140 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995eb4f3-f6c8-4fb3-a991-8777f7b645cb" (UID: "995eb4f3-f6c8-4fb3-a991-8777f7b645cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.591091 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data" (OuterVolumeSpecName: "config-data") pod "995eb4f3-f6c8-4fb3-a991-8777f7b645cb" (UID: "995eb4f3-f6c8-4fb3-a991-8777f7b645cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.660851 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.660884 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwq8j\" (UniqueName: \"kubernetes.io/projected/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-kube-api-access-bwq8j\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:57 crc kubenswrapper[4642]: I0128 07:03:57.660896 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995eb4f3-f6c8-4fb3-a991-8777f7b645cb-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.119420 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gh7r2" event={"ID":"995eb4f3-f6c8-4fb3-a991-8777f7b645cb","Type":"ContainerDied","Data":"2bdcdce2786dea08cef7826a9602b367509a99b9adb1b5adbd0186daa5cd9f0b"}
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.119458 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gh7r2"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.119473 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdcdce2786dea08cef7826a9602b367509a99b9adb1b5adbd0186daa5cd9f0b"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.617959 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"]
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620531 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdc7d12-45ca-4119-ac92-cf16305dc903" containerName="ovn-config"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620569 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdc7d12-45ca-4119-ac92-cf16305dc903" containerName="ovn-config"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620581 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910e369b-6c77-43c1-95de-f125a1813bc6" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620589 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="910e369b-6c77-43c1-95de-f125a1813bc6" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620599 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995eb4f3-f6c8-4fb3-a991-8777f7b645cb" containerName="keystone-db-sync"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620605 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="995eb4f3-f6c8-4fb3-a991-8777f7b645cb" containerName="keystone-db-sync"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620617 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="init"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620622 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="init"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620632 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806827f7-19db-4656-89ed-9d2253ecbf67" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620637 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="806827f7-19db-4656-89ed-9d2253ecbf67" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620647 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bc8d83-3e11-4a38-9df5-f1c3f391540f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620653 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bc8d83-3e11-4a38-9df5-f1c3f391540f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620661 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47afaa6-b39b-4af7-aaee-d47fa6e7932a" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620666 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47afaa6-b39b-4af7-aaee-d47fa6e7932a" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620677 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3987de36-92f6-4d2b-b4d6-42ab67c7525f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620684 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3987de36-92f6-4d2b-b4d6-42ab67c7525f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620695 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="dnsmasq-dns"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620701 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="dnsmasq-dns"
Jan 28 07:03:58 crc kubenswrapper[4642]: E0128 07:03:58.620710 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096310a2-8b6e-436b-9b34-6263ac3806b6" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620715 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="096310a2-8b6e-436b-9b34-6263ac3806b6" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620912 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="877c5c9a-8740-4a17-b470-ac9c2274c745" containerName="dnsmasq-dns"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620922 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="3987de36-92f6-4d2b-b4d6-42ab67c7525f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620933 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="096310a2-8b6e-436b-9b34-6263ac3806b6" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620946 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="910e369b-6c77-43c1-95de-f125a1813bc6" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620956 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdc7d12-45ca-4119-ac92-cf16305dc903" containerName="ovn-config"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620968 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="806827f7-19db-4656-89ed-9d2253ecbf67" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620976 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bc8d83-3e11-4a38-9df5-f1c3f391540f" containerName="mariadb-account-create-update"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620983 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="995eb4f3-f6c8-4fb3-a991-8777f7b645cb" containerName="keystone-db-sync"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.620989 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47afaa6-b39b-4af7-aaee-d47fa6e7932a" containerName="mariadb-database-create"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.621816 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.627004 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678681 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678799 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678820 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqvv\" (UniqueName: \"kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678845 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678882 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.678955 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.689022 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tn5p6"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.689999 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.692944 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.692952 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.693059 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.693151 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.693159 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lm99t"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.700486 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tn5p6"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.780699 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781067 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781178 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781274 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781365 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7gk7\" (UniqueName: \"kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781452 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781661 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781796 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfqvv\" (UniqueName: \"kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781872 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.781939 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.782012 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.782080 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.784397 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.784437 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.785724 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.786022 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.786126 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.790164 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.791886 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.793334 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.793523 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.799158 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.803378 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfqvv\" (UniqueName: \"kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv\") pod \"dnsmasq-dns-f6dd467cf-6c45s\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.863124 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ds4qn"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.864532 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885165 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885237 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885285 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885349 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885878 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.885957 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7gk7\" (UniqueName: \"kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.886634 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-68q8l"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.888909 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.889996 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.898746 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.900082 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.906313 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.925795 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.943373 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946033 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946345 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5z67g"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946517 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946650 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946775 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8bgcg"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.946920 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.948374 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ds4qn"]
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.962976 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7gk7\" (UniqueName: \"kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7\") pod \"keystone-bootstrap-tn5p6\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987736 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987799 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc45k\" (UniqueName: \"kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987851 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987869 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987884 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987900 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987920 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qsb\" (UniqueName: \"kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987941 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987956 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.987975 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:58 crc kubenswrapper[4642]: I0128 07:03:58.996948 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-68q8l"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.006615 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tn5p6"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.049887 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8s4pp"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.050898 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.055812 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8lsqz"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.056146 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.056284 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.076898 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.083131 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s4pp"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091101 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091132 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091153 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091169 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091212 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qsb\" (UniqueName: \"kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091232 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091253 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091271 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091289 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091310 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091324 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091350 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659fz\" (UniqueName: \"kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091379 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091415 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc45k\" (UniqueName: \"kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091438 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091471 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091572 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.091809 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.094249 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.095754 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.095983 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.096992 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.097559 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.099617 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.101008 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.106122 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.108169 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc45k\" (UniqueName: \"kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k\") pod \"ceilometer-0\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.111632 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.112870 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qsb\" (UniqueName: \"kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb\") pod \"neutron-db-sync-ds4qn\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " pod="openstack/neutron-db-sync-ds4qn"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.114880 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vtvtp"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.116070 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vtvtp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.124329 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xlw9c"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.124595 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.124691 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vtvtp"]
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.134760 4642 generic.go:334] "Generic (PLEG): container finished" podID="455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" containerID="ee5d93f01fa5bb07bc234a89d646ecd0e5b52c4158e76237be7999a9b57da439" exitCode=0
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.134804 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxtxh" event={"ID":"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0","Type":"ContainerDied","Data":"ee5d93f01fa5bb07bc234a89d646ecd0e5b52c4158e76237be7999a9b57da439"}
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.143868 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193155 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659fz\" (UniqueName: \"kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193214 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193254 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193283 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswvz\" (UniqueName: \"kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193304 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193329 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193357 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193375 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193404 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193421 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193435 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.193935 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.197419 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.197805 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
pod="openstack/cinder-db-sync-68q8l" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.198895 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.200009 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.207396 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659fz\" (UniqueName: \"kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz\") pod \"cinder-db-sync-68q8l\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") " pod="openstack/cinder-db-sync-68q8l" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295339 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjm8z\" (UniqueName: \"kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295387 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295422 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295504 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswvz\" (UniqueName: \"kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295542 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295605 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295627 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295668 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295714 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.295775 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzn7r\" (UniqueName: \"kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.296495 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.296583 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.296635 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.299726 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.300549 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.304761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.306493 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ds4qn" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.311144 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswvz\" (UniqueName: \"kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz\") pod \"placement-db-sync-8s4pp\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.327510 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-68q8l" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.366926 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s4pp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.400808 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.400961 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzn7r\" (UniqueName: \"kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401032 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401240 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401296 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjm8z\" (UniqueName: \"kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401374 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401452 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401563 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401617 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 
28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.401824 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.402612 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.402830 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.403339 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.403556 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.405571 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.407703 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.416837 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjm8z\" (UniqueName: \"kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z\") pod \"dnsmasq-dns-5b4577c76c-qlhw5\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.421395 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.422100 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzn7r\" (UniqueName: \"kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r\") pod \"barbican-db-sync-vtvtp\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.444388 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.458549 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tn5p6"] Jan 28 07:03:59 crc kubenswrapper[4642]: W0128 07:03:59.489845 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c6952a_9ec5_4332_9e8c_27087d153b33.slice/crio-6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5 WatchSource:0}: Error finding container 6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5: Status 404 returned error can't find the container with id 6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5 Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.529093 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"] Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.599476 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.758093 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-68q8l"] Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.834720 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ds4qn"] Jan 28 07:03:59 crc kubenswrapper[4642]: W0128 07:03:59.835667 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1bc0b13_73e3_41ef_a7a5_c5da7462234e.slice/crio-c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a WatchSource:0}: Error finding container c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a: Status 404 returned error can't find the container with id c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.922827 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"] Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.929013 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vtvtp"] Jan 28 07:03:59 crc kubenswrapper[4642]: W0128 07:03:59.932475 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914420d1_9b01_49b9_962d_405cc170a061.slice/crio-f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a WatchSource:0}: Error finding container f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a: Status 404 returned error can't find the container with id f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a Jan 28 07:03:59 crc kubenswrapper[4642]: W0128 07:03:59.933824 4642 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode57d0d61_2e5c_4cd6_be95_3e3c6f102f7e.slice/crio-2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e WatchSource:0}: Error finding container 2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e: Status 404 returned error can't find the container with id 2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e Jan 28 07:03:59 crc kubenswrapper[4642]: I0128 07:03:59.933836 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8s4pp"] Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.148736 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s4pp" event={"ID":"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e","Type":"ContainerStarted","Data":"2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.152284 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vtvtp" event={"ID":"914420d1-9b01-49b9-962d-405cc170a061","Type":"ContainerStarted","Data":"f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.153827 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tn5p6" event={"ID":"51c6952a-9ec5-4332-9e8c-27087d153b33","Type":"ContainerStarted","Data":"ca388336b7a530c569f20022ecdf0d4bd6ae9d866ffeaf2de9f3ec0865c0b1ba"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.153862 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tn5p6" event={"ID":"51c6952a-9ec5-4332-9e8c-27087d153b33","Type":"ContainerStarted","Data":"6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.156509 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-68q8l" event={"ID":"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc","Type":"ContainerStarted","Data":"b3c4b4a8793104f62bd36792eb30d3fed9eebcab37605b4605206e780cf8ffbb"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.160596 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" event={"ID":"7867b3ca-d714-457b-a06e-4189d33c1332","Type":"ContainerStarted","Data":"cfb7ca5c6a23f9ebae0672640eaffd1b0133049b35787507f61df803ebce5c93"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.162053 4642 generic.go:334] "Generic (PLEG): container finished" podID="0436cf41-376d-4469-9a72-bbc9b028ce14" containerID="457fd1c4b62227cc596ac442dc012a4262aa25314778852b57eda48cc4550db1" exitCode=0 Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.162106 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s" event={"ID":"0436cf41-376d-4469-9a72-bbc9b028ce14","Type":"ContainerDied","Data":"457fd1c4b62227cc596ac442dc012a4262aa25314778852b57eda48cc4550db1"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.162122 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s" event={"ID":"0436cf41-376d-4469-9a72-bbc9b028ce14","Type":"ContainerStarted","Data":"25aafd519995b47dbd6d1d10e2cbd6d3df89e604a810455df3fd9fbb33ad5990"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.168031 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ds4qn" 
event={"ID":"b1bc0b13-73e3-41ef-a7a5-c5da7462234e","Type":"ContainerStarted","Data":"c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.171030 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerStarted","Data":"d144cb525a349523ee3bb14504c31b8e1a0c6d9d2d8ad6eb9be210030d682afd"} Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.182861 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tn5p6" podStartSLOduration=2.182846495 podStartE2EDuration="2.182846495s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:00.175907502 +0000 UTC m=+963.407996310" watchObservedRunningTime="2026-01-28 07:04:00.182846495 +0000 UTC m=+963.414935305" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.202060 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ds4qn" podStartSLOduration=2.202040394 podStartE2EDuration="2.202040394s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:00.196611752 +0000 UTC m=+963.428700561" watchObservedRunningTime="2026-01-28 07:04:00.202040394 +0000 UTC m=+963.434129203" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.504126 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.532927 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.547015 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kxtxh" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626391 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626448 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626498 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626590 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626667 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfqvv\" (UniqueName: \"kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.626692 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0\") pod \"0436cf41-376d-4469-9a72-bbc9b028ce14\" (UID: \"0436cf41-376d-4469-9a72-bbc9b028ce14\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.647297 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.650431 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv" (OuterVolumeSpecName: "kube-api-access-sfqvv") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "kube-api-access-sfqvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.650811 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.650982 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config" (OuterVolumeSpecName: "config") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.654658 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.666342 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0436cf41-376d-4469-9a72-bbc9b028ce14" (UID: "0436cf41-376d-4469-9a72-bbc9b028ce14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.729295 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjtr2\" (UniqueName: \"kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2\") pod \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.729536 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data\") pod \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.729760 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle\") pod \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.729894 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data\") pod \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\" (UID: \"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0\") " Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730641 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730669 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfqvv\" (UniqueName: \"kubernetes.io/projected/0436cf41-376d-4469-9a72-bbc9b028ce14-kube-api-access-sfqvv\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730684 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730699 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730709 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.730720 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0436cf41-376d-4469-9a72-bbc9b028ce14-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.732915 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2" (OuterVolumeSpecName: "kube-api-access-bjtr2") pod "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" (UID: "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0"). InnerVolumeSpecName "kube-api-access-bjtr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.733432 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" (UID: "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.750840 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" (UID: "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.767345 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data" (OuterVolumeSpecName: "config-data") pod "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" (UID: "455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.832535 4642 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.832560 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjtr2\" (UniqueName: \"kubernetes.io/projected/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-kube-api-access-bjtr2\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.832577 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:00 crc kubenswrapper[4642]: I0128 07:04:00.832588 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.201399 4642 generic.go:334] "Generic (PLEG): container finished" podID="7867b3ca-d714-457b-a06e-4189d33c1332" containerID="30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d" exitCode=0 Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.201481 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" event={"ID":"7867b3ca-d714-457b-a06e-4189d33c1332","Type":"ContainerDied","Data":"30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d"} Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.211912 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.211924 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6dd467cf-6c45s" event={"ID":"0436cf41-376d-4469-9a72-bbc9b028ce14","Type":"ContainerDied","Data":"25aafd519995b47dbd6d1d10e2cbd6d3df89e604a810455df3fd9fbb33ad5990"} Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.211984 4642 scope.go:117] "RemoveContainer" containerID="457fd1c4b62227cc596ac442dc012a4262aa25314778852b57eda48cc4550db1" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.216441 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ds4qn" event={"ID":"b1bc0b13-73e3-41ef-a7a5-c5da7462234e","Type":"ContainerStarted","Data":"4e4263ac1dc87a98ebb025c676004fd7e691a0a0b2599f225402e05a3235f616"} Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.218569 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kxtxh" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.218833 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kxtxh" event={"ID":"455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0","Type":"ContainerDied","Data":"ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10"} Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.219100 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed892deccdc1c86cf86ca174db104e3027b21901eff2fdbb1771d43016e09b10" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.406759 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"] Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.417900 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6dd467cf-6c45s"] Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.527350 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"] Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.603285 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:01 crc kubenswrapper[4642]: E0128 07:04:01.603949 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0436cf41-376d-4469-9a72-bbc9b028ce14" containerName="init" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.603963 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0436cf41-376d-4469-9a72-bbc9b028ce14" containerName="init" Jan 28 07:04:01 crc kubenswrapper[4642]: E0128 07:04:01.603977 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" containerName="glance-db-sync" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.603983 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" containerName="glance-db-sync" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.604299 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" containerName="glance-db-sync" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.604318 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0436cf41-376d-4469-9a72-bbc9b028ce14" containerName="init" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.606337 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.626403 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750747 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c278z\" (UniqueName: \"kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750782 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750854 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750879 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750902 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.750922 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852680 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852728 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852759 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852780 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852829 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c278z\" (UniqueName: \"kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.852844 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.853522 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.853985 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.854459 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.855057 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.855552 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.881991 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c278z\" (UniqueName: 
\"kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z\") pod \"dnsmasq-dns-9cf86d859-sjnx7\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:01 crc kubenswrapper[4642]: I0128 07:04:01.956337 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.230532 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" event={"ID":"7867b3ca-d714-457b-a06e-4189d33c1332","Type":"ContainerStarted","Data":"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa"} Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.230871 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="dnsmasq-dns" containerID="cri-o://f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa" gracePeriod=10 Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.231160 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.255085 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" podStartSLOduration=4.255060784 podStartE2EDuration="4.255060784s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:02.248169441 +0000 UTC m=+965.480258249" watchObservedRunningTime="2026-01-28 07:04:02.255060784 +0000 UTC m=+965.487149594" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.403492 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.435701 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.437480 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.442039 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.442352 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.442646 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7m5n4" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.444580 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566271 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566511 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566571 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566601 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566692 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566707 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.566747 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grgh2\" (UniqueName: \"kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " 
pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.652401 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.653692 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.655913 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.669730 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.669790 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.671751 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.677046 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.677342 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.677891 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678213 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678379 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678398 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678417 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grgh2\" (UniqueName: \"kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678586 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.678831 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.679044 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.682082 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.701270 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grgh2\" (UniqueName: \"kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.705024 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.751835 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780264 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwmd\" (UniqueName: \"kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780306 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780336 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780374 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780611 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780672 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.780724 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.882913 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.882968 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883030 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883109 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plwmd\" (UniqueName: \"kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883127 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883147 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883176 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.883608 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.884696 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.884784 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.895949 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 
crc kubenswrapper[4642]: I0128 07:04:02.901044 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.907164 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.927089 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwmd\" (UniqueName: \"kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.939454 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:02 crc kubenswrapper[4642]: I0128 07:04:02.968538 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.092758 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.112507 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0436cf41-376d-4469-9a72-bbc9b028ce14" path="/var/lib/kubelet/pods/0436cf41-376d-4469-9a72-bbc9b028ce14/volumes" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.245571 4642 generic.go:334] "Generic (PLEG): container finished" podID="51c6952a-9ec5-4332-9e8c-27087d153b33" containerID="ca388336b7a530c569f20022ecdf0d4bd6ae9d866ffeaf2de9f3ec0865c0b1ba" exitCode=0 Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.245674 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tn5p6" event={"ID":"51c6952a-9ec5-4332-9e8c-27087d153b33","Type":"ContainerDied","Data":"ca388336b7a530c569f20022ecdf0d4bd6ae9d866ffeaf2de9f3ec0865c0b1ba"} Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.248121 4642 generic.go:334] "Generic (PLEG): container finished" podID="7867b3ca-d714-457b-a06e-4189d33c1332" containerID="f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa" exitCode=0 Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.248163 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" event={"ID":"7867b3ca-d714-457b-a06e-4189d33c1332","Type":"ContainerDied","Data":"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa"} Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.248256 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" event={"ID":"7867b3ca-d714-457b-a06e-4189d33c1332","Type":"ContainerDied","Data":"cfb7ca5c6a23f9ebae0672640eaffd1b0133049b35787507f61df803ebce5c93"} Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.248289 4642 scope.go:117] "RemoveContainer" containerID="f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.248334 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4577c76c-qlhw5" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.252521 4642 generic.go:334] "Generic (PLEG): container finished" podID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerID="d516516936276a00583105fede7bb02d8ad7c8256070c920c72dbcac7347d2a5" exitCode=0 Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.252556 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" event={"ID":"4fddf6a7-1202-40a7-add6-98fc0a30fda6","Type":"ContainerDied","Data":"d516516936276a00583105fede7bb02d8ad7c8256070c920c72dbcac7347d2a5"} Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.252581 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" event={"ID":"4fddf6a7-1202-40a7-add6-98fc0a30fda6","Type":"ContainerStarted","Data":"221af6b5d7fd833f24dfb9b360c7ade3cd13b416602e8d6b3d55b16eb035c81e"} Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.290414 4642 scope.go:117] "RemoveContainer" containerID="30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.294997 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.295075 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.295145 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.295217 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjm8z\" (UniqueName: \"kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.295265 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.295353 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc\") pod \"7867b3ca-d714-457b-a06e-4189d33c1332\" (UID: \"7867b3ca-d714-457b-a06e-4189d33c1332\") " Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.321682 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z" (OuterVolumeSpecName: "kube-api-access-jjm8z") pod 
"7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "kube-api-access-jjm8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.347940 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.357371 4642 scope.go:117] "RemoveContainer" containerID="f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa" Jan 28 07:04:03 crc kubenswrapper[4642]: E0128 07:04:03.357908 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa\": container with ID starting with f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa not found: ID does not exist" containerID="f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.357952 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa"} err="failed to get container status \"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa\": rpc error: code = NotFound desc = could not find container \"f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa\": container with ID starting with f499ebcf9e976cfd9d24f8922b20c507de0f2ef1b64b32e677c8603ab860d4fa not found: ID does not exist" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.357979 4642 scope.go:117] "RemoveContainer" containerID="30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.359167 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: E0128 07:04:03.359419 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d\": container with ID starting with 30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d not found: ID does not exist" containerID="30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.359517 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d"} err="failed to get container status \"30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d\": rpc error: code = NotFound desc = could not find container \"30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d\": container with ID starting with 30534759fa2262d1d9bf28ea23460dc1bcbe261c516e9cd9fb6b98da7a72d25d not found: ID does not exist" Jan 28 07:04:03 crc kubenswrapper[4642]: W0128 07:04:03.363077 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84277b74_ee89_4c86_9985_30272872cee9.slice/crio-fbcf11c38e5761fc82e6b4e0fa706da664869fc562de0b3f8c26907cf71e54af WatchSource:0}: Error finding container fbcf11c38e5761fc82e6b4e0fa706da664869fc562de0b3f8c26907cf71e54af: Status 404 returned error can't find the container with id fbcf11c38e5761fc82e6b4e0fa706da664869fc562de0b3f8c26907cf71e54af Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.377319 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.380000 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config" (OuterVolumeSpecName: "config") pod "7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.383235 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.389591 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7867b3ca-d714-457b-a06e-4189d33c1332" (UID: "7867b3ca-d714-457b-a06e-4189d33c1332"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398353 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398374 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398384 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398392 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398400 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjm8z\" (UniqueName: \"kubernetes.io/projected/7867b3ca-d714-457b-a06e-4189d33c1332-kube-api-access-jjm8z\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.398409 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b3ca-d714-457b-a06e-4189d33c1332-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.549799 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.584169 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"] Jan 28 07:04:03 crc kubenswrapper[4642]: I0128 07:04:03.588973 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b4577c76c-qlhw5"] Jan 28 07:04:04 crc kubenswrapper[4642]: I0128 07:04:04.260905 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerStarted","Data":"fbcf11c38e5761fc82e6b4e0fa706da664869fc562de0b3f8c26907cf71e54af"} Jan 28 07:04:04 crc kubenswrapper[4642]: I0128 07:04:04.267373 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" event={"ID":"4fddf6a7-1202-40a7-add6-98fc0a30fda6","Type":"ContainerStarted","Data":"dbb471184780ba9c493f7f11aec118d9c1d6cca84cdae62611491ff5e78207d8"} Jan 28 07:04:04 crc kubenswrapper[4642]: I0128 07:04:04.267461 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:04 crc kubenswrapper[4642]: I0128 07:04:04.286057 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" podStartSLOduration=3.286046231 podStartE2EDuration="3.286046231s" podCreationTimestamp="2026-01-28 07:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:04.282252294 +0000 UTC m=+967.514341104" watchObservedRunningTime="2026-01-28 07:04:04.286046231 +0000 UTC m=+967.518135041" Jan 28 07:04:05 crc 
kubenswrapper[4642]: I0128 07:04:05.105927 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" path="/var/lib/kubelet/pods/7867b3ca-d714-457b-a06e-4189d33c1332/volumes" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.295943 4642 generic.go:334] "Generic (PLEG): container finished" podID="b1bc0b13-73e3-41ef-a7a5-c5da7462234e" containerID="4e4263ac1dc87a98ebb025c676004fd7e691a0a0b2599f225402e05a3235f616" exitCode=0 Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.296040 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ds4qn" event={"ID":"b1bc0b13-73e3-41ef-a7a5-c5da7462234e","Type":"ContainerDied","Data":"4e4263ac1dc87a98ebb025c676004fd7e691a0a0b2599f225402e05a3235f616"} Jan 28 07:04:07 crc kubenswrapper[4642]: W0128 07:04:07.459816 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ae5f52_55f6_47d3_936b_2af9cf3618fa.slice/crio-78a7c6fae283e8db8f3ade6588474c137364e86133a49ca11b0c0ef95381f27f WatchSource:0}: Error finding container 78a7c6fae283e8db8f3ade6588474c137364e86133a49ca11b0c0ef95381f27f: Status 404 returned error can't find the container with id 78a7c6fae283e8db8f3ade6588474c137364e86133a49ca11b0c0ef95381f27f Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.526458 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tn5p6" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687462 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts\") pod \"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687540 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys\") pod \"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687602 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys\") pod \"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687627 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle\") pod \"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687689 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7gk7\" (UniqueName: \"kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7\") pod \"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.687778 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data\") pod 
\"51c6952a-9ec5-4332-9e8c-27087d153b33\" (UID: \"51c6952a-9ec5-4332-9e8c-27087d153b33\") " Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.694941 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts" (OuterVolumeSpecName: "scripts") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.694990 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.695239 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.702040 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7" (OuterVolumeSpecName: "kube-api-access-h7gk7") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "kube-api-access-h7gk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.713553 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.720562 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data" (OuterVolumeSpecName: "config-data") pod "51c6952a-9ec5-4332-9e8c-27087d153b33" (UID: "51c6952a-9ec5-4332-9e8c-27087d153b33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790588 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7gk7\" (UniqueName: \"kubernetes.io/projected/51c6952a-9ec5-4332-9e8c-27087d153b33-kube-api-access-h7gk7\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790620 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790632 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790642 4642 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790653 4642 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:07 crc kubenswrapper[4642]: I0128 07:04:07.790665 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6952a-9ec5-4332-9e8c-27087d153b33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.310393 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tn5p6" event={"ID":"51c6952a-9ec5-4332-9e8c-27087d153b33","Type":"ContainerDied","Data":"6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5"} Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.310803 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc4d69941c7f6a60e94bf05e81050a3e3fb506266279995654e0530277c2ad5" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.310444 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tn5p6" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.317519 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerStarted","Data":"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618"} Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.324152 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerStarted","Data":"78a7c6fae283e8db8f3ade6588474c137364e86133a49ca11b0c0ef95381f27f"} Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.575291 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.675260 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tn5p6"] Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.698647 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tn5p6"] Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.719340 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748229 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cpmkw"] Jan 28 07:04:08 crc kubenswrapper[4642]: E0128 07:04:08.748517 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="init" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748534 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="init" Jan 28 07:04:08 crc kubenswrapper[4642]: E0128 07:04:08.748558 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="dnsmasq-dns" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748564 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="dnsmasq-dns" Jan 28 07:04:08 crc kubenswrapper[4642]: E0128 07:04:08.748583 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c6952a-9ec5-4332-9e8c-27087d153b33" containerName="keystone-bootstrap" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748590 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c6952a-9ec5-4332-9e8c-27087d153b33" containerName="keystone-bootstrap" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748717 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7867b3ca-d714-457b-a06e-4189d33c1332" containerName="dnsmasq-dns" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.748732 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c6952a-9ec5-4332-9e8c-27087d153b33" containerName="keystone-bootstrap" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.749152 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.752409 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.757790 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.758045 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lm99t" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.758083 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.758350 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.765040 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cpmkw"] Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.924663 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.924711 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.924739 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.924848 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf2xd\" (UniqueName: \"kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.925013 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:08 crc kubenswrapper[4642]: I0128 07:04:08.925073 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027202 4642 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rf2xd\" (UniqueName: \"kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027260 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027284 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027353 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027369 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.027385 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.033270 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.033598 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.033834 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.034204 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") 
" pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.034810 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.040278 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf2xd\" (UniqueName: \"kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd\") pod \"keystone-bootstrap-cpmkw\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.065587 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.106323 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c6952a-9ec5-4332-9e8c-27087d153b33" path="/var/lib/kubelet/pods/51c6952a-9ec5-4332-9e8c-27087d153b33/volumes" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.599842 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ds4qn" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.735767 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle\") pod \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.735815 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config\") pod \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.735874 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9qsb\" (UniqueName: \"kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb\") pod \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\" (UID: \"b1bc0b13-73e3-41ef-a7a5-c5da7462234e\") " Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.740164 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb" (OuterVolumeSpecName: "kube-api-access-v9qsb") pod "b1bc0b13-73e3-41ef-a7a5-c5da7462234e" (UID: "b1bc0b13-73e3-41ef-a7a5-c5da7462234e"). InnerVolumeSpecName "kube-api-access-v9qsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.758439 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1bc0b13-73e3-41ef-a7a5-c5da7462234e" (UID: "b1bc0b13-73e3-41ef-a7a5-c5da7462234e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.758717 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config" (OuterVolumeSpecName: "config") pod "b1bc0b13-73e3-41ef-a7a5-c5da7462234e" (UID: "b1bc0b13-73e3-41ef-a7a5-c5da7462234e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.838912 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.838976 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:09 crc kubenswrapper[4642]: I0128 07:04:09.838990 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9qsb\" (UniqueName: \"kubernetes.io/projected/b1bc0b13-73e3-41ef-a7a5-c5da7462234e-kube-api-access-v9qsb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.340589 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ds4qn" event={"ID":"b1bc0b13-73e3-41ef-a7a5-c5da7462234e","Type":"ContainerDied","Data":"c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a"} Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.340620 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1eafa5e88d1784deb6ef4690ba6304690e2f8627f7e4f8ac6aa7a6dabf8e32a" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.340669 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ds4qn" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.686216 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cpmkw"] Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.823214 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.823401 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="dnsmasq-dns" containerID="cri-o://dbb471184780ba9c493f7f11aec118d9c1d6cca84cdae62611491ff5e78207d8" gracePeriod=10 Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.824646 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.852137 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:10 crc kubenswrapper[4642]: E0128 07:04:10.852576 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1bc0b13-73e3-41ef-a7a5-c5da7462234e" containerName="neutron-db-sync" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.852641 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1bc0b13-73e3-41ef-a7a5-c5da7462234e" containerName="neutron-db-sync" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.852852 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1bc0b13-73e3-41ef-a7a5-c5da7462234e" containerName="neutron-db-sync" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.853607 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.871137 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.955063 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.956176 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.962765 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.962901 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963023 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8bgcg" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963132 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963124 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7l2\" (UniqueName: \"kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963276 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963305 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963408 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963424 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.963443 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:10 crc kubenswrapper[4642]: I0128 07:04:10.982989 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.064975 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065013 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065044 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065077 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065095 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065132 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065148 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065166 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065195 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hmhp\" (UniqueName: \"kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065231 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.065268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7l2\" (UniqueName: \"kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.066310 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.066363 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.067253 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.067580 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.067619 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.091230 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7l2\" (UniqueName: \"kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2\") pod \"dnsmasq-dns-7d9f567799-j8ft2\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.169969 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.171146 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs\") pod \"neutron-5586f7766d-jr6js\" (UID: 
\"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.171169 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.171270 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hmhp\" (UniqueName: \"kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.171339 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.175695 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.181770 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.182772 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.187872 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hmhp\" (UniqueName: \"kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.203353 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.205937 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle\") pod \"neutron-5586f7766d-jr6js\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.294676 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.370834 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.371838 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vtvtp" event={"ID":"914420d1-9b01-49b9-962d-405cc170a061","Type":"ContainerStarted","Data":"ee9f1b4e9320eb073e04d76be36b698bc5d17488f1ccb1870e6b94381f2890ad"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.383215 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpmkw" event={"ID":"fae0170c-94f4-4e36-b99d-8d183fb5b6e1","Type":"ContainerStarted","Data":"a302f4c04cd9e0202f6bed62be7fa466b5300d3d7ce24cbcb7fc8bbf566f124b"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.383244 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpmkw" event={"ID":"fae0170c-94f4-4e36-b99d-8d183fb5b6e1","Type":"ContainerStarted","Data":"3a5cefedb57c1169c457381c92fb797276024c885de5894a90d2950b9755185a"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.387574 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerStarted","Data":"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.387760 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-log" containerID="cri-o://0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" gracePeriod=30 Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.389264 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-httpd" containerID="cri-o://34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" gracePeriod=30 Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.426446 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerStarted","Data":"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.420954 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vtvtp" podStartSLOduration=2.055019688 podStartE2EDuration="12.420941017s" podCreationTimestamp="2026-01-28 07:03:59 +0000 UTC" firstStartedPulling="2026-01-28 07:03:59.935301279 +0000 UTC m=+963.167390088" lastFinishedPulling="2026-01-28 07:04:10.301222608 +0000 UTC m=+973.533311417" observedRunningTime="2026-01-28 07:04:11.41724838 +0000 UTC m=+974.649337189" watchObservedRunningTime="2026-01-28 07:04:11.420941017 +0000 UTC m=+974.653029826" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.442558 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cpmkw" podStartSLOduration=3.442549769 podStartE2EDuration="3.442549769s" podCreationTimestamp="2026-01-28 07:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 07:04:11.437031517 +0000 UTC m=+974.669120326" watchObservedRunningTime="2026-01-28 07:04:11.442549769 +0000 UTC m=+974.674638578" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.461222 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerStarted","Data":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.466398 4642 generic.go:334] "Generic (PLEG): container finished" podID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerID="dbb471184780ba9c493f7f11aec118d9c1d6cca84cdae62611491ff5e78207d8" exitCode=0 Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.466528 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" event={"ID":"4fddf6a7-1202-40a7-add6-98fc0a30fda6","Type":"ContainerDied","Data":"dbb471184780ba9c493f7f11aec118d9c1d6cca84cdae62611491ff5e78207d8"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.466614 4642 scope.go:117] "RemoveContainer" containerID="dbb471184780ba9c493f7f11aec118d9c1d6cca84cdae62611491ff5e78207d8" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.466783 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cf86d859-sjnx7" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.471326 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.471266953 podStartE2EDuration="10.471266953s" podCreationTimestamp="2026-01-28 07:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:11.45863526 +0000 UTC m=+974.690724068" watchObservedRunningTime="2026-01-28 07:04:11.471266953 +0000 UTC m=+974.703355762" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.475836 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s4pp" event={"ID":"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e","Type":"ContainerStarted","Data":"8f3bfb87982d1d73c0039a593d9c0a279acc73e1ec0d3418564012febbf9c609"} Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.480902 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.481029 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.481173 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.481373 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.481523 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.481592 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c278z\" (UniqueName: \"kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.500888 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z" (OuterVolumeSpecName: "kube-api-access-c278z") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "kube-api-access-c278z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.511511 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8s4pp" podStartSLOduration=3.149668626 podStartE2EDuration="13.511491325s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="2026-01-28 07:03:59.936427638 +0000 UTC m=+963.168516447" lastFinishedPulling="2026-01-28 07:04:10.298250337 +0000 UTC m=+973.530339146" observedRunningTime="2026-01-28 07:04:11.497514652 +0000 UTC m=+974.729603461" watchObservedRunningTime="2026-01-28 07:04:11.511491325 +0000 UTC m=+974.743580134" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.525400 4642 scope.go:117] "RemoveContainer" containerID="d516516936276a00583105fede7bb02d8ad7c8256070c920c72dbcac7347d2a5" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.564097 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.568805 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.575835 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config" (OuterVolumeSpecName: "config") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.581688 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.590504 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.590963 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") pod \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\" (UID: \"4fddf6a7-1202-40a7-add6-98fc0a30fda6\") " Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591510 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591526 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591537 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c278z\" (UniqueName: \"kubernetes.io/projected/4fddf6a7-1202-40a7-add6-98fc0a30fda6-kube-api-access-c278z\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591546 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591554 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: W0128 07:04:11.591747 4642 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4fddf6a7-1202-40a7-add6-98fc0a30fda6/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.591762 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4fddf6a7-1202-40a7-add6-98fc0a30fda6" (UID: "4fddf6a7-1202-40a7-add6-98fc0a30fda6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.692786 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4fddf6a7-1202-40a7-add6-98fc0a30fda6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.750442 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.810872 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.816827 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9cf86d859-sjnx7"] Jan 28 07:04:11 crc kubenswrapper[4642]: I0128 07:04:11.939045 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.120158 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205547 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205595 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205646 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205688 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grgh2\" (UniqueName: \"kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205704 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205755 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: \"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.205772 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs\") pod \"84277b74-ee89-4c86-9985-30272872cee9\" (UID: 
\"84277b74-ee89-4c86-9985-30272872cee9\") " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.206301 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.206363 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs" (OuterVolumeSpecName: "logs") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.211296 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts" (OuterVolumeSpecName: "scripts") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.212043 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2" (OuterVolumeSpecName: "kube-api-access-grgh2") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "kube-api-access-grgh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.214112 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.230312 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.244850 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data" (OuterVolumeSpecName: "config-data") pod "84277b74-ee89-4c86-9985-30272872cee9" (UID: "84277b74-ee89-4c86-9985-30272872cee9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308288 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308317 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grgh2\" (UniqueName: \"kubernetes.io/projected/84277b74-ee89-4c86-9985-30272872cee9-kube-api-access-grgh2\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308352 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308362 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308371 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308379 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84277b74-ee89-4c86-9985-30272872cee9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.308386 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84277b74-ee89-4c86-9985-30272872cee9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.323913 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.410174 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.496871 4642 generic.go:334] "Generic (PLEG): container finished" podID="e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" containerID="8f3bfb87982d1d73c0039a593d9c0a279acc73e1ec0d3418564012febbf9c609" exitCode=0 Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.497167 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s4pp" event={"ID":"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e","Type":"ContainerDied","Data":"8f3bfb87982d1d73c0039a593d9c0a279acc73e1ec0d3418564012febbf9c609"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.499366 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" event={"ID":"901e4414-3465-41d0-a4d6-cc041e6e4319","Type":"ContainerStarted","Data":"46355cc7c3bae8c99fa8c2ec7e44d4e07027c16308171bd32b5241dc5920fc99"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.500981 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerStarted","Data":"0b33548272365b99119db86110f1e9dae2bf010bbd78472ecca863008f673d1c"} Jan 28 
07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.505964 4642 generic.go:334] "Generic (PLEG): container finished" podID="84277b74-ee89-4c86-9985-30272872cee9" containerID="34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" exitCode=0 Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.505989 4642 generic.go:334] "Generic (PLEG): container finished" podID="84277b74-ee89-4c86-9985-30272872cee9" containerID="0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" exitCode=143 Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.506022 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerDied","Data":"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.506041 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerDied","Data":"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.506051 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84277b74-ee89-4c86-9985-30272872cee9","Type":"ContainerDied","Data":"fbcf11c38e5761fc82e6b4e0fa706da664869fc562de0b3f8c26907cf71e54af"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.506065 4642 scope.go:117] "RemoveContainer" containerID="34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.506151 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.520990 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerStarted","Data":"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d"} Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.521080 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-log" containerID="cri-o://e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" gracePeriod=30 Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.521207 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-httpd" containerID="cri-o://244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" gracePeriod=30 Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.565384 4642 scope.go:117] "RemoveContainer" containerID="0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.576500 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.576455661 podStartE2EDuration="11.576455661s" podCreationTimestamp="2026-01-28 07:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:12.54507572 +0000 UTC m=+975.777164529" watchObservedRunningTime="2026-01-28 
07:04:12.576455661 +0000 UTC m=+975.808544461" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.590268 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.603256 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.609732 4642 scope.go:117] "RemoveContainer" containerID="34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.612705 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a\": container with ID starting with 34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a not found: ID does not exist" containerID="34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.612745 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a"} err="failed to get container status \"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a\": rpc error: code = NotFound desc = could not find container \"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a\": container with ID starting with 34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a not found: ID does not exist" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.612778 4642 scope.go:117] "RemoveContainer" containerID="0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.612869 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.613258 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-httpd" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613270 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-httpd" Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.613293 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="init" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613299 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="init" Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.613306 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-log" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613311 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-log" Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.613328 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="dnsmasq-dns" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613334 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="dnsmasq-dns" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 
07:04:12.613495 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" containerName="dnsmasq-dns" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613509 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-httpd" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613524 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="84277b74-ee89-4c86-9985-30272872cee9" containerName="glance-log" Jan 28 07:04:12 crc kubenswrapper[4642]: E0128 07:04:12.613892 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618\": container with ID starting with 0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618 not found: ID does not exist" containerID="0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613936 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618"} err="failed to get container status \"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618\": rpc error: code = NotFound desc = could not find container \"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618\": container with ID starting with 0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618 not found: ID does not exist" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.613974 4642 scope.go:117] "RemoveContainer" containerID="34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.614405 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.615828 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a"} err="failed to get container status \"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a\": rpc error: code = NotFound desc = could not find container \"34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a\": container with ID starting with 34ea78f22a78b7c0944266680ec2f6a01f282d9ba73b9c296ab4f3198809ac8a not found: ID does not exist" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.615853 4642 scope.go:117] "RemoveContainer" containerID="0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.616143 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.616525 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.617879 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.618228 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618"} err="failed to get container status \"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618\": rpc error: code = NotFound desc = could not find container \"0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618\": container with ID starting with 0b064c397a3c3aa31d95c9c8c13b5dbd42dcc82a8b22b5b4da684b7a38c2e618 not found: ID does not exist" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718104 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718396 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718431 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718518 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc 
kubenswrapper[4642]: I0128 07:04:12.718549 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718566 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718647 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.718674 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndqk\" (UniqueName: \"kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822733 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822810 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822841 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822884 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822902 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc 
kubenswrapper[4642]: I0128 07:04:12.822918 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.822981 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.823005 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndqk\" (UniqueName: \"kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.823607 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.824505 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.824528 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.830362 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.832031 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.832450 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.832720 4642 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.839020 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndqk\" (UniqueName: \"kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.866152 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " pod="openstack/glance-default-external-api-0" Jan 28 07:04:12 crc kubenswrapper[4642]: I0128 07:04:12.966413 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.123111 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fddf6a7-1202-40a7-add6-98fc0a30fda6" path="/var/lib/kubelet/pods/4fddf6a7-1202-40a7-add6-98fc0a30fda6/volumes" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.123931 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84277b74-ee89-4c86-9985-30272872cee9" path="/var/lib/kubelet/pods/84277b74-ee89-4c86-9985-30272872cee9/volumes" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.127581 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.237551 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238096 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plwmd\" (UniqueName: \"kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238230 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238321 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238411 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238441 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.238491 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run\") pod \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\" (UID: \"b3ae5f52-55f6-47d3-936b-2af9cf3618fa\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.240697 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs" (OuterVolumeSpecName: "logs") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.242520 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.247732 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.253259 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd" (OuterVolumeSpecName: "kube-api-access-plwmd") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "kube-api-access-plwmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.264482 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts" (OuterVolumeSpecName: "scripts") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.283027 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.297314 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:13 crc kubenswrapper[4642]: E0128 07:04:13.297890 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-httpd" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.297909 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-httpd" Jan 28 07:04:13 crc kubenswrapper[4642]: E0128 07:04:13.297936 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-log" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.297943 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-log" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.298141 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-httpd" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.298170 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerName="glance-log" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.299180 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.306443 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.310859 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.311107 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342226 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342261 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plwmd\" (UniqueName: \"kubernetes.io/projected/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-kube-api-access-plwmd\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342275 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342287 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342331 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.342343 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.358549 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data" (OuterVolumeSpecName: "config-data") pod "b3ae5f52-55f6-47d3-936b-2af9cf3618fa" (UID: "b3ae5f52-55f6-47d3-936b-2af9cf3618fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.359605 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444158 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444330 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444365 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444410 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5bj\" (UniqueName: \"kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444527 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444562 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444653 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444735 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ae5f52-55f6-47d3-936b-2af9cf3618fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.444747 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on 
node \"crc\" DevicePath \"\"" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.523955 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.529636 4642 generic.go:334] "Generic (PLEG): container finished" podID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerID="d132fd0596b763de0025bb0059a6ec1f184d08f36e3a4724bb9f90ddd9feb264" exitCode=0 Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.529715 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" event={"ID":"901e4414-3465-41d0-a4d6-cc041e6e4319","Type":"ContainerDied","Data":"d132fd0596b763de0025bb0059a6ec1f184d08f36e3a4724bb9f90ddd9feb264"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.531680 4642 generic.go:334] "Generic (PLEG): container finished" podID="914420d1-9b01-49b9-962d-405cc170a061" containerID="ee9f1b4e9320eb073e04d76be36b698bc5d17488f1ccb1870e6b94381f2890ad" exitCode=0 Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.531731 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vtvtp" event={"ID":"914420d1-9b01-49b9-962d-405cc170a061","Type":"ContainerDied","Data":"ee9f1b4e9320eb073e04d76be36b698bc5d17488f1ccb1870e6b94381f2890ad"} Jan 28 07:04:13 crc kubenswrapper[4642]: W0128 07:04:13.533576 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15853341_a13e_4f13_a998_9026f9034213.slice/crio-532e81ff6fe727a8a522238c743ea92aa193c8a7ba84f6f36081a2ed8fbe67d0 WatchSource:0}: Error finding container 532e81ff6fe727a8a522238c743ea92aa193c8a7ba84f6f36081a2ed8fbe67d0: Status 404 returned error can't find the container with id 532e81ff6fe727a8a522238c743ea92aa193c8a7ba84f6f36081a2ed8fbe67d0 Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.534197 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerStarted","Data":"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.534236 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerStarted","Data":"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.534700 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539793 4642 generic.go:334] "Generic (PLEG): container finished" podID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerID="244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" exitCode=0 Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539814 4642 generic.go:334] "Generic (PLEG): container finished" podID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" containerID="e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" exitCode=143 Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539885 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539924 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerDied","Data":"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539980 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerDied","Data":"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.539995 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ae5f52-55f6-47d3-936b-2af9cf3618fa","Type":"ContainerDied","Data":"78a7c6fae283e8db8f3ade6588474c137364e86133a49ca11b0c0ef95381f27f"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.540011 4642 scope.go:117] "RemoveContainer" containerID="244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.547942 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.548002 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.548389 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.548984 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.549009 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5bj\" (UniqueName: \"kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.549054 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.549074 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.554769 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.556629 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.556992 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.557136 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.557401 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerStarted","Data":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.557929 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.571895 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.580715 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5586f7766d-jr6js" podStartSLOduration=3.580705456 podStartE2EDuration="3.580705456s" podCreationTimestamp="2026-01-28 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:13.564079506 +0000 UTC m=+976.796168315" watchObservedRunningTime="2026-01-28 07:04:13.580705456 +0000 UTC m=+976.812794255" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.581986 4642 scope.go:117] "RemoveContainer" 
containerID="e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.586420 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5bj\" (UniqueName: \"kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj\") pod \"neutron-6c985bd689-xq2rx\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.606251 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.614318 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.621393 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.622657 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.626343 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.627869 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.627897 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.631158 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.631414 4642 scope.go:117] "RemoveContainer" containerID="244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" Jan 28 07:04:13 crc kubenswrapper[4642]: E0128 07:04:13.631931 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d\": container with ID starting with 244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d not found: ID does not exist" containerID="244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.631954 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d"} err="failed to get container status \"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d\": rpc error: code = NotFound desc = could not find container \"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d\": container with ID starting with 244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d not found: ID does not exist" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.631973 4642 scope.go:117] "RemoveContainer" containerID="e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" Jan 28 07:04:13 crc kubenswrapper[4642]: E0128 07:04:13.632444 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f\": container with ID starting with e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f not found: ID does not exist" containerID="e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.632464 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f"} err="failed to get container status \"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f\": rpc error: code = NotFound desc = could not find container \"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f\": container with ID starting with e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f not found: ID does not exist" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.632488 4642 scope.go:117] "RemoveContainer" containerID="244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.632694 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d"} err="failed to get container status \"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d\": rpc error: code = NotFound desc = could not find container \"244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d\": container with ID starting with 244e855263d7c7665ae8a9e0821b3aaab2c0a5cf0b2d447bdbf68c5532a6d19d not found: ID does not exist" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.632712 4642 scope.go:117] "RemoveContainer" containerID="e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.633055 
4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f"} err="failed to get container status \"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f\": rpc error: code = NotFound desc = could not find container \"e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f\": container with ID starting with e767fdefcdaef61fe9fa9884fe9ae7063406a2f671dd3f9933423487d0cc313f not found: ID does not exist" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752121 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752175 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752228 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752253 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqb44\" (UniqueName: \"kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752276 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752294 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752323 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.752344 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.853775 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.854082 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqb44\" (UniqueName: \"kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.854113 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.854137 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.854174 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.855098 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.855238 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.855268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.859486 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.859496 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.859952 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.861677 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.865655 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.866583 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.867006 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.878210 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqb44\" (UniqueName: \"kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.891321 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.895461 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8s4pp" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.949060 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.956494 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts\") pod \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.956662 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs\") pod \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.956807 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data\") pod \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.956866 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle\") pod \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.956965 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswvz\" (UniqueName: \"kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz\") pod \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\" (UID: \"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e\") " Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.958286 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs" (OuterVolumeSpecName: "logs") pod "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" (UID: "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.979594 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data" (OuterVolumeSpecName: "config-data") pod "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" (UID: "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.984050 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz" (OuterVolumeSpecName: "kube-api-access-fswvz") pod "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" (UID: "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e"). InnerVolumeSpecName "kube-api-access-fswvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.984058 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts" (OuterVolumeSpecName: "scripts") pod "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" (UID: "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:13 crc kubenswrapper[4642]: I0128 07:04:13.986563 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" (UID: "e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.059446 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.059487 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.059498 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.059507 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fswvz\" (UniqueName: \"kubernetes.io/projected/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-kube-api-access-fswvz\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.059516 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.280767 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.576118 4642 generic.go:334] "Generic (PLEG): container finished" podID="fae0170c-94f4-4e36-b99d-8d183fb5b6e1" containerID="a302f4c04cd9e0202f6bed62be7fa466b5300d3d7ce24cbcb7fc8bbf566f124b" exitCode=0 Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.576747 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpmkw" event={"ID":"fae0170c-94f4-4e36-b99d-8d183fb5b6e1","Type":"ContainerDied","Data":"a302f4c04cd9e0202f6bed62be7fa466b5300d3d7ce24cbcb7fc8bbf566f124b"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.593337 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8s4pp" event={"ID":"e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e","Type":"ContainerDied","Data":"2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.593366 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fc064d9d8769464bb0bb94324f2d8ae3a917ccb76d4f6780ab925f0648a7e8e" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.593430 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8s4pp" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.603958 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59fd947774-hzkdn"] Jan 28 07:04:14 crc kubenswrapper[4642]: E0128 07:04:14.604336 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" containerName="placement-db-sync" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.604348 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" containerName="placement-db-sync" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.604534 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" containerName="placement-db-sync" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.605385 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.610264 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.610521 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8lsqz" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.610699 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.610798 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.611337 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.614622 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" event={"ID":"901e4414-3465-41d0-a4d6-cc041e6e4319","Type":"ContainerStarted","Data":"1d369b4f5a0ab0a5dc9cca771c171ac02b6bdf82cdf93e51b34872b9f3ce4a03"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.615815 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.626115 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerStarted","Data":"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.626162 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerStarted","Data":"532e81ff6fe727a8a522238c743ea92aa193c8a7ba84f6f36081a2ed8fbe67d0"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.633971 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerStarted","Data":"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.634019 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" 
event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerStarted","Data":"91668e33200f7db7d9e54d0632173836f6399cabf45a1530365d5b7e870e07e6"} Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.638547 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59fd947774-hzkdn"] Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.663044 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" podStartSLOduration=4.663025621 podStartE2EDuration="4.663025621s" podCreationTimestamp="2026-01-28 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:14.651972899 +0000 UTC m=+977.884061709" watchObservedRunningTime="2026-01-28 07:04:14.663025621 +0000 UTC m=+977.895114429" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.682995 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684082 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-logs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684440 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-internal-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684462 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-public-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684548 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-scripts\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684675 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-kube-api-access-p7z9s\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684710 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-config-data\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.684751 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-combined-ca-bundle\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: W0128 07:04:14.702179 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2266e1d6_2cec_4bfc_9a24_b6408860e980.slice/crio-7b44222e576f8c1ce45730f7a6081d9eb3fac5af21415e2a6920c0cf5630721e WatchSource:0}: Error finding container 7b44222e576f8c1ce45730f7a6081d9eb3fac5af21415e2a6920c0cf5630721e: Status 404 returned error can't find the container with id 7b44222e576f8c1ce45730f7a6081d9eb3fac5af21415e2a6920c0cf5630721e Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786051 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-config-data\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786095 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-combined-ca-bundle\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786132 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-logs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786153 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-internal-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786170 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-public-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786268 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-scripts\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.786357 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-kube-api-access-p7z9s\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc 
kubenswrapper[4642]: I0128 07:04:14.786705 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-logs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.790585 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-internal-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.790756 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-combined-ca-bundle\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.791546 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-public-tls-certs\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.796912 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-config-data\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.808652 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7z9s\" (UniqueName: \"kubernetes.io/projected/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-kube-api-access-p7z9s\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.816867 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e08591ac-7a27-4fc3-aaf0-b6957a9d94b5-scripts\") pod \"placement-59fd947774-hzkdn\" (UID: \"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5\") " pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.928828 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:04:14 crc kubenswrapper[4642]: I0128 07:04:14.935029 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.091493 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data\") pod \"914420d1-9b01-49b9-962d-405cc170a061\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.091678 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzn7r\" (UniqueName: \"kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r\") pod \"914420d1-9b01-49b9-962d-405cc170a061\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.091712 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle\") pod \"914420d1-9b01-49b9-962d-405cc170a061\" (UID: \"914420d1-9b01-49b9-962d-405cc170a061\") " Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.113790 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "914420d1-9b01-49b9-962d-405cc170a061" (UID: "914420d1-9b01-49b9-962d-405cc170a061"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.116683 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ae5f52-55f6-47d3-936b-2af9cf3618fa" path="/var/lib/kubelet/pods/b3ae5f52-55f6-47d3-936b-2af9cf3618fa/volumes" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.119469 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "914420d1-9b01-49b9-962d-405cc170a061" (UID: "914420d1-9b01-49b9-962d-405cc170a061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.119534 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r" (OuterVolumeSpecName: "kube-api-access-nzn7r") pod "914420d1-9b01-49b9-962d-405cc170a061" (UID: "914420d1-9b01-49b9-962d-405cc170a061"). InnerVolumeSpecName "kube-api-access-nzn7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.194007 4642 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.194040 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzn7r\" (UniqueName: \"kubernetes.io/projected/914420d1-9b01-49b9-962d-405cc170a061-kube-api-access-nzn7r\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.194050 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914420d1-9b01-49b9-962d-405cc170a061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.359883 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59fd947774-hzkdn"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.679766 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerStarted","Data":"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.680686 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.689305 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59fd947774-hzkdn" event={"ID":"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5","Type":"ContainerStarted","Data":"922c0da609381176513cbcc8070b36be8ae94d188eb3d33565a8b12bbdc98d22"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.689371 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59fd947774-hzkdn" event={"ID":"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5","Type":"ContainerStarted","Data":"f4c80458d900138aaf342e3f4c0f360e7f1e26e91aaae20de5e8742ec26c9790"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.691350 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerStarted","Data":"15dbe696b934bd09d8341707ccd9135f43d5140feb7d1dcfa639f46c873bc41a"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.691384 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerStarted","Data":"7b44222e576f8c1ce45730f7a6081d9eb3fac5af21415e2a6920c0cf5630721e"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.701466 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c985bd689-xq2rx" podStartSLOduration=2.7014545500000002 podStartE2EDuration="2.70145455s" podCreationTimestamp="2026-01-28 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:15.697746758 +0000 UTC m=+978.929835568" watchObservedRunningTime="2026-01-28 07:04:15.70145455 +0000 UTC m=+978.933543349" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.703514 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerStarted","Data":"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.716754 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vtvtp" event={"ID":"914420d1-9b01-49b9-962d-405cc170a061","Type":"ContainerDied","Data":"f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a"} Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.716807 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3395f0ddadb95a5571e6ea0f821ed8ad1c04f3f5ea3f0da9aeb61198292459a" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.716879 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vtvtp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.721636 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:15 crc kubenswrapper[4642]: E0128 07:04:15.721951 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914420d1-9b01-49b9-962d-405cc170a061" containerName="barbican-db-sync" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.721969 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="914420d1-9b01-49b9-962d-405cc170a061" containerName="barbican-db-sync" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.722136 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="914420d1-9b01-49b9-962d-405cc170a061" containerName="barbican-db-sync" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.736041 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.739949 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.739924782 podStartE2EDuration="3.739924782s" podCreationTimestamp="2026-01-28 07:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:15.719095509 +0000 UTC m=+978.951184317" watchObservedRunningTime="2026-01-28 07:04:15.739924782 +0000 UTC m=+978.972013591" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.741798 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.741874 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.741978 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xlw9c" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.779243 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.789936 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.791291 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.793292 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.810255 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.823292 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.823490 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.823536 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc49s\" (UniqueName: \"kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.823665 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.823716 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.912504 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927806 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927865 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " 
pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927901 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927942 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927961 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpv9\" (UniqueName: \"kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.927989 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.928013 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.928035 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.928120 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc49s\" (UniqueName: \"kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.928269 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.928645 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.935048 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.937683 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.944746 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.952643 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.953011 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc49s\" (UniqueName: \"kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s\") pod \"barbican-keystone-listener-659d7fff7d-x4jhp\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.954107 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.974714 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.976139 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.979170 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 07:04:15 crc kubenswrapper[4642]: I0128 07:04:15.993559 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.007557 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033735 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033791 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ktt\" (UniqueName: \"kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033822 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033910 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpv9\" (UniqueName: \"kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033939 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033960 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.033985 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.034031 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.034064 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.034097 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.034138 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.036996 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.040266 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.053800 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.054145 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.054888 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpv9\" (UniqueName: \"kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9\") pod \"barbican-worker-84d4f5dddf-lct7d\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.080743 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.132093 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161617 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161655 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ktt\" (UniqueName: \"kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161673 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161730 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fld52\" (UniqueName: \"kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161789 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161809 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161831 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161848 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161874 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161893 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.161925 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.162645 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.162933 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.167465 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.167761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.167886 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.177156 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ktt\" (UniqueName: \"kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt\") pod \"dnsmasq-dns-5d574698ff-4mg6j\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.254358 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.265344 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fld52\" (UniqueName: \"kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.265499 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.265567 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.265687 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.265710 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.268246 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.273252 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.274085 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.277979 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc 
kubenswrapper[4642]: I0128 07:04:16.284738 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.287674 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fld52\" (UniqueName: \"kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52\") pod \"barbican-api-6586c6444d-858tc\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.296078 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.366675 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.366731 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.366943 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.366967 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf2xd\" (UniqueName: \"kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.367009 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.367026 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys\") pod \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\" (UID: \"fae0170c-94f4-4e36-b99d-8d183fb5b6e1\") " Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.373323 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.374565 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.374599 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd" (OuterVolumeSpecName: "kube-api-access-rf2xd") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "kube-api-access-rf2xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.388864 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts" (OuterVolumeSpecName: "scripts") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.405072 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.407293 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data" (OuterVolumeSpecName: "config-data") pod "fae0170c-94f4-4e36-b99d-8d183fb5b6e1" (UID: "fae0170c-94f4-4e36-b99d-8d183fb5b6e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472127 4642 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472178 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf2xd\" (UniqueName: \"kubernetes.io/projected/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-kube-api-access-rf2xd\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472212 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472220 4642 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472228 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.472236 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae0170c-94f4-4e36-b99d-8d183fb5b6e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.596914 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:16 crc kubenswrapper[4642]: W0128 07:04:16.598924 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46dbf3f9_174e_4b72_9432_c0307d14c9ac.slice/crio-2cf5bec6bbbdbee97091bb7afd989e8508b07aae3fa4c4259c5491ea238508d0 WatchSource:0}: Error finding container 2cf5bec6bbbdbee97091bb7afd989e8508b07aae3fa4c4259c5491ea238508d0: Status 404 returned error can't find the container with id 2cf5bec6bbbdbee97091bb7afd989e8508b07aae3fa4c4259c5491ea238508d0 Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.660625 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:16 crc kubenswrapper[4642]: W0128 07:04:16.663554 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d4e46c_7083_4acb_8925_a9a92278e0c6.slice/crio-d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91 WatchSource:0}: Error finding container d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91: Status 404 returned error can't find the container with id d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91 Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.733886 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerStarted","Data":"a98f299adcf7ea63064992435916a114d91f4a8ea25a12db110f82f97974ec91"} Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.739942 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f77bb558-ws68h"] Jan 28 
07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.741990 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cpmkw" Jan 28 07:04:16 crc kubenswrapper[4642]: E0128 07:04:16.745589 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae0170c-94f4-4e36-b99d-8d183fb5b6e1" containerName="keystone-bootstrap" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.745686 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae0170c-94f4-4e36-b99d-8d183fb5b6e1" containerName="keystone-bootstrap" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.746024 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae0170c-94f4-4e36-b99d-8d183fb5b6e1" containerName="keystone-bootstrap" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.746723 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpmkw" event={"ID":"fae0170c-94f4-4e36-b99d-8d183fb5b6e1","Type":"ContainerDied","Data":"3a5cefedb57c1169c457381c92fb797276024c885de5894a90d2950b9755185a"} Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.746820 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5cefedb57c1169c457381c92fb797276024c885de5894a90d2950b9755185a" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.746820 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.747440 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerStarted","Data":"2cf5bec6bbbdbee97091bb7afd989e8508b07aae3fa4c4259c5491ea238508d0"} Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.749570 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.749970 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.750270 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.751438 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.751472 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.751630 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lm99t" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.752062 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59fd947774-hzkdn" event={"ID":"e08591ac-7a27-4fc3-aaf0-b6957a9d94b5","Type":"ContainerStarted","Data":"7d426098c460219a58a61f19fc1363528c0778d9eef1d403cc7d78f386882946"} Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.752128 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.752151 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.756316 4642 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerStarted","Data":"d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91"} Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.760884 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f77bb558-ws68h"] Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.765031 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.765013579 podStartE2EDuration="3.765013579s" podCreationTimestamp="2026-01-28 07:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:16.762042894 +0000 UTC m=+979.994131704" watchObservedRunningTime="2026-01-28 07:04:16.765013579 +0000 UTC m=+979.997102388" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.786844 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.816431 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59fd947774-hzkdn" podStartSLOduration=2.813584874 podStartE2EDuration="2.813584874s" podCreationTimestamp="2026-01-28 07:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:16.812750997 +0000 UTC m=+980.044839806" watchObservedRunningTime="2026-01-28 07:04:16.813584874 +0000 UTC m=+980.045673683" Jan 28 07:04:16 crc kubenswrapper[4642]: W0128 07:04:16.828464 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747300e5_abfb_4f04_90b4_a88ab60f1a5f.slice/crio-2a85d0fb95e2ea374114b5ea9ffed07d76745ecc7f2bca2880f43258ebbe0014 WatchSource:0}: Error finding container 2a85d0fb95e2ea374114b5ea9ffed07d76745ecc7f2bca2880f43258ebbe0014: Status 404 returned error can't find the container with id 2a85d0fb95e2ea374114b5ea9ffed07d76745ecc7f2bca2880f43258ebbe0014 Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.850092 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884557 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-credential-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884781 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-public-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884836 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-combined-ca-bundle\") pod \"keystone-f77bb558-ws68h\" (UID: 
\"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884861 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lls6w\" (UniqueName: \"kubernetes.io/projected/83dd211e-6375-4640-921f-c26d8181e31b-kube-api-access-lls6w\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884878 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-fernet-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884905 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-config-data\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884924 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-internal-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.884940 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-scripts\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986332 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-public-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986380 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-combined-ca-bundle\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986400 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lls6w\" (UniqueName: \"kubernetes.io/projected/83dd211e-6375-4640-921f-c26d8181e31b-kube-api-access-lls6w\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986421 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-fernet-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " 
pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986443 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-config-data\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986464 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-internal-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986487 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-scripts\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.986537 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-credential-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.989200 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-public-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.990645 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-internal-tls-certs\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.991969 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-fernet-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:16 crc kubenswrapper[4642]: I0128 07:04:16.992554 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-scripts\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.000325 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-credential-keys\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.001375 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-combined-ca-bundle\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.001791 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dd211e-6375-4640-921f-c26d8181e31b-config-data\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.002308 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lls6w\" (UniqueName: \"kubernetes.io/projected/83dd211e-6375-4640-921f-c26d8181e31b-kube-api-access-lls6w\") pod \"keystone-f77bb558-ws68h\" (UID: \"83dd211e-6375-4640-921f-c26d8181e31b\") " pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.068830 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.773108 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" event={"ID":"ff037186-7ca7-4860-a8ae-0d3b84abe5da","Type":"ContainerDied","Data":"48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8"} Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.772917 4642 generic.go:334] "Generic (PLEG): container finished" podID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerID="48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8" exitCode=0 Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.773535 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" event={"ID":"ff037186-7ca7-4860-a8ae-0d3b84abe5da","Type":"ContainerStarted","Data":"b14d9eb602b348ed9b56037b54a171cada0610389e6bdc4758a5a2ef43f78cb8"} Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.775459 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerStarted","Data":"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29"} Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.775507 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerStarted","Data":"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86"} Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.775520 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerStarted","Data":"2a85d0fb95e2ea374114b5ea9ffed07d76745ecc7f2bca2880f43258ebbe0014"} Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.775817 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="dnsmasq-dns" containerID="cri-o://1d369b4f5a0ab0a5dc9cca771c171ac02b6bdf82cdf93e51b34872b9f3ce4a03" gracePeriod=10 Jan 28 07:04:17 crc kubenswrapper[4642]: I0128 07:04:17.807037 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6586c6444d-858tc" podStartSLOduration=2.807022099 
podStartE2EDuration="2.807022099s" podCreationTimestamp="2026-01-28 07:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:17.80431526 +0000 UTC m=+981.036404069" watchObservedRunningTime="2026-01-28 07:04:17.807022099 +0000 UTC m=+981.039110909" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.728639 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d89d45779-ghhkc"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.730167 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.734300 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6dcc495f5d-rjtzn"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.735576 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.743791 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d89d45779-ghhkc"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.752450 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dcc495f5d-rjtzn"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.789146 4642 generic.go:334] "Generic (PLEG): container finished" podID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerID="1d369b4f5a0ab0a5dc9cca771c171ac02b6bdf82cdf93e51b34872b9f3ce4a03" exitCode=0 Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.789232 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" event={"ID":"901e4414-3465-41d0-a4d6-cc041e6e4319","Type":"ContainerDied","Data":"1d369b4f5a0ab0a5dc9cca771c171ac02b6bdf82cdf93e51b34872b9f3ce4a03"} Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.789364 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.789395 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834392 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-combined-ca-bundle\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834440 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df992e-0fe2-4c0b-b217-10bc93d786ac-logs\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834485 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-combined-ca-bundle\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: 
\"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834508 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data-custom\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834556 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14fef196-27d3-4eb2-8347-80de82035a9d-logs\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834576 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf284\" (UniqueName: \"kubernetes.io/projected/b7df992e-0fe2-4c0b-b217-10bc93d786ac-kube-api-access-gf284\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834600 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834616 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834650 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r78m\" (UniqueName: \"kubernetes.io/projected/14fef196-27d3-4eb2-8347-80de82035a9d-kube-api-access-2r78m\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.834669 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data-custom\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.846957 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65b856d58d-bqqcd"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.848324 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.850560 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.850623 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.859941 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65b856d58d-bqqcd"] Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.936448 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-combined-ca-bundle\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937126 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data-custom\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937277 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-public-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937388 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-internal-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937490 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14fef196-27d3-4eb2-8347-80de82035a9d-logs\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937568 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf284\" (UniqueName: \"kubernetes.io/projected/b7df992e-0fe2-4c0b-b217-10bc93d786ac-kube-api-access-gf284\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937639 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937711 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-combined-ca-bundle\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937796 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937857 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937921 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data-custom\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.937984 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8spf2\" (UniqueName: \"kubernetes.io/projected/9e8d2ea1-c13f-48c7-8481-284676407f2b-kube-api-access-8spf2\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938069 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r78m\" (UniqueName: \"kubernetes.io/projected/14fef196-27d3-4eb2-8347-80de82035a9d-kube-api-access-2r78m\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938136 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data-custom\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938217 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8d2ea1-c13f-48c7-8481-284676407f2b-logs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938311 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-combined-ca-bundle\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: 
\"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938393 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df992e-0fe2-4c0b-b217-10bc93d786ac-logs\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.938723 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df992e-0fe2-4c0b-b217-10bc93d786ac-logs\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.940454 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14fef196-27d3-4eb2-8347-80de82035a9d-logs\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.941148 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-combined-ca-bundle\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.942101 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data-custom\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.948250 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.952753 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-combined-ca-bundle\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.956397 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf284\" (UniqueName: \"kubernetes.io/projected/b7df992e-0fe2-4c0b-b217-10bc93d786ac-kube-api-access-gf284\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.957653 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14fef196-27d3-4eb2-8347-80de82035a9d-config-data-custom\") pod 
\"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.957991 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df992e-0fe2-4c0b-b217-10bc93d786ac-config-data\") pod \"barbican-worker-6d89d45779-ghhkc\" (UID: \"b7df992e-0fe2-4c0b-b217-10bc93d786ac\") " pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:18 crc kubenswrapper[4642]: I0128 07:04:18.967693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r78m\" (UniqueName: \"kubernetes.io/projected/14fef196-27d3-4eb2-8347-80de82035a9d-kube-api-access-2r78m\") pod \"barbican-keystone-listener-6dcc495f5d-rjtzn\" (UID: \"14fef196-27d3-4eb2-8347-80de82035a9d\") " pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.039867 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8d2ea1-c13f-48c7-8481-284676407f2b-logs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.039988 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-public-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040008 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-internal-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040048 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040071 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-combined-ca-bundle\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040094 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data-custom\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040109 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8spf2\" (UniqueName: 
\"kubernetes.io/projected/9e8d2ea1-c13f-48c7-8481-284676407f2b-kube-api-access-8spf2\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.040898 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8d2ea1-c13f-48c7-8481-284676407f2b-logs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.047340 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-public-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.048044 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-internal-tls-certs\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.051526 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.053146 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d89d45779-ghhkc" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.053945 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-config-data-custom\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.053968 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8d2ea1-c13f-48c7-8481-284676407f2b-combined-ca-bundle\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.059671 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8spf2\" (UniqueName: \"kubernetes.io/projected/9e8d2ea1-c13f-48c7-8481-284676407f2b-kube-api-access-8spf2\") pod \"barbican-api-65b856d58d-bqqcd\" (UID: \"9e8d2ea1-c13f-48c7-8481-284676407f2b\") " pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.063583 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.159465 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.248540 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.346433 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj7l2\" (UniqueName: \"kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.346863 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.346918 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.346950 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.346992 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.347050 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0\") pod \"901e4414-3465-41d0-a4d6-cc041e6e4319\" (UID: \"901e4414-3465-41d0-a4d6-cc041e6e4319\") " Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.351946 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2" (OuterVolumeSpecName: "kube-api-access-fj7l2") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "kube-api-access-fj7l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.384111 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.385430 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.389452 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config" (OuterVolumeSpecName: "config") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.390926 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.400967 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "901e4414-3465-41d0-a4d6-cc041e6e4319" (UID: "901e4414-3465-41d0-a4d6-cc041e6e4319"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.441903 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f77bb558-ws68h"] Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451247 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj7l2\" (UniqueName: \"kubernetes.io/projected/901e4414-3465-41d0-a4d6-cc041e6e4319-kube-api-access-fj7l2\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451280 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451290 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451300 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451309 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.451317 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/901e4414-3465-41d0-a4d6-cc041e6e4319-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 
07:04:19 crc kubenswrapper[4642]: W0128 07:04:19.452440 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83dd211e_6375_4640_921f_c26d8181e31b.slice/crio-d18de2c9334dd7aacb4189cf441da3576881bcc1d69bed6549986d0f3c55d32e WatchSource:0}: Error finding container d18de2c9334dd7aacb4189cf441da3576881bcc1d69bed6549986d0f3c55d32e: Status 404 returned error can't find the container with id d18de2c9334dd7aacb4189cf441da3576881bcc1d69bed6549986d0f3c55d32e Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.557502 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6dcc495f5d-rjtzn"] Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.562544 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d89d45779-ghhkc"] Jan 28 07:04:19 crc kubenswrapper[4642]: W0128 07:04:19.583519 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7df992e_0fe2_4c0b_b217_10bc93d786ac.slice/crio-0bfe30af1d9c268f72cd7797957fab18a9d586791a9e58bc5c5d02cdbeb72a29 WatchSource:0}: Error finding container 0bfe30af1d9c268f72cd7797957fab18a9d586791a9e58bc5c5d02cdbeb72a29: Status 404 returned error can't find the container with id 0bfe30af1d9c268f72cd7797957fab18a9d586791a9e58bc5c5d02cdbeb72a29 Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.723232 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65b856d58d-bqqcd"] Jan 28 07:04:19 crc kubenswrapper[4642]: W0128 07:04:19.741001 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e8d2ea1_c13f_48c7_8481_284676407f2b.slice/crio-e199d61b3684bec818a68a0874fd5e049386d69bf1b4f630eaa43e796059514f WatchSource:0}: Error finding container e199d61b3684bec818a68a0874fd5e049386d69bf1b4f630eaa43e796059514f: Status 404 returned error can't find the container with id e199d61b3684bec818a68a0874fd5e049386d69bf1b4f630eaa43e796059514f Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.804331 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" event={"ID":"ff037186-7ca7-4860-a8ae-0d3b84abe5da","Type":"ContainerStarted","Data":"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.804455 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.810400 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d89d45779-ghhkc" event={"ID":"b7df992e-0fe2-4c0b-b217-10bc93d786ac","Type":"ContainerStarted","Data":"0bfe30af1d9c268f72cd7797957fab18a9d586791a9e58bc5c5d02cdbeb72a29"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.819369 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerStarted","Data":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.820769 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" event={"ID":"901e4414-3465-41d0-a4d6-cc041e6e4319","Type":"ContainerDied","Data":"46355cc7c3bae8c99fa8c2ec7e44d4e07027c16308171bd32b5241dc5920fc99"} Jan 
28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.820802 4642 scope.go:117] "RemoveContainer" containerID="1d369b4f5a0ab0a5dc9cca771c171ac02b6bdf82cdf93e51b34872b9f3ce4a03" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.820887 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f567799-j8ft2" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.837966 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" podStartSLOduration=4.83795598 podStartE2EDuration="4.83795598s" podCreationTimestamp="2026-01-28 07:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:19.8345469 +0000 UTC m=+983.066635710" watchObservedRunningTime="2026-01-28 07:04:19.83795598 +0000 UTC m=+983.070044789" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.838904 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f77bb558-ws68h" event={"ID":"83dd211e-6375-4640-921f-c26d8181e31b","Type":"ContainerStarted","Data":"54b12eeb19f6cfa3135108b91567111c854ab675bf71bd13ae91b5e5ab16e99e"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.838940 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f77bb558-ws68h" event={"ID":"83dd211e-6375-4640-921f-c26d8181e31b","Type":"ContainerStarted","Data":"d18de2c9334dd7aacb4189cf441da3576881bcc1d69bed6549986d0f3c55d32e"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.839181 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.842097 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65b856d58d-bqqcd" event={"ID":"9e8d2ea1-c13f-48c7-8481-284676407f2b","Type":"ContainerStarted","Data":"e199d61b3684bec818a68a0874fd5e049386d69bf1b4f630eaa43e796059514f"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.844782 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" event={"ID":"14fef196-27d3-4eb2-8347-80de82035a9d","Type":"ContainerStarted","Data":"ef57fc2b55f352daff6173afc5c30ac45cd63ed485fc9d489012d663cc754bd2"} Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.865670 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f77bb558-ws68h" podStartSLOduration=3.865649049 podStartE2EDuration="3.865649049s" podCreationTimestamp="2026-01-28 07:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:19.864816534 +0000 UTC m=+983.096905343" watchObservedRunningTime="2026-01-28 07:04:19.865649049 +0000 UTC m=+983.097737858" Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.878092 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:19 crc kubenswrapper[4642]: I0128 07:04:19.883663 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d9f567799-j8ft2"] Jan 28 07:04:20 crc kubenswrapper[4642]: I0128 07:04:20.842782 4642 scope.go:117] "RemoveContainer" containerID="d132fd0596b763de0025bb0059a6ec1f184d08f36e3a4724bb9f90ddd9feb264" Jan 28 07:04:20 crc kubenswrapper[4642]: I0128 07:04:20.855117 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-65b856d58d-bqqcd" event={"ID":"9e8d2ea1-c13f-48c7-8481-284676407f2b","Type":"ContainerStarted","Data":"117c2fcb571fb494f74e8c988ea96c13a5c6b5c7c2dd6b08b7331de40b1a0d10"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.108268 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" path="/var/lib/kubelet/pods/901e4414-3465-41d0-a4d6-cc041e6e4319/volumes" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.867029 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" event={"ID":"14fef196-27d3-4eb2-8347-80de82035a9d","Type":"ContainerStarted","Data":"48f93132be2eb303f106276a4892dd3d125c1cb43e89b55a2e999026a03409f8"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.867406 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" event={"ID":"14fef196-27d3-4eb2-8347-80de82035a9d","Type":"ContainerStarted","Data":"f4ad50a5ffd1348f46d497cbf13171ebc473cdae76052157e38d24adac57ce0d"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.869293 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d89d45779-ghhkc" event={"ID":"b7df992e-0fe2-4c0b-b217-10bc93d786ac","Type":"ContainerStarted","Data":"e3189bada6cee17ac92a7fb7ca32eb4df86c50bd3b3bc2b724cc3791e5082683"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.869358 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d89d45779-ghhkc" event={"ID":"b7df992e-0fe2-4c0b-b217-10bc93d786ac","Type":"ContainerStarted","Data":"a1c931b75e87b729f8810ae2268041f8f1428ee7bf4a44842f7cd3df9026b3fd"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.871095 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerStarted","Data":"0090957fad569ddbb7c7ff55a8fe348adcd9ee144812fbe6b714c0e5a35d0b81"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.871129 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerStarted","Data":"5f6fc8c6cfdde9eaa51ee3340fc2fb4f8003c31526b7645e86f40f84c39c5d65"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.874223 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerStarted","Data":"a37755ebf447dbff77d6188e2a852c3f791d86639d7b60ced552faa195ea2f08"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.874273 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerStarted","Data":"9d82231da3c8bfefb2a21ad829aae165a34370ea4b8bbdd6384e077495ed3f39"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.876498 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65b856d58d-bqqcd" event={"ID":"9e8d2ea1-c13f-48c7-8481-284676407f2b","Type":"ContainerStarted","Data":"74f9bbbfd5b24884945672652fee29ff7d5a190eccfa564cc5793cb95547aaad"} Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.876677 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:21 
crc kubenswrapper[4642]: I0128 07:04:21.876705 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.903627 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6dcc495f5d-rjtzn" podStartSLOduration=2.589422967 podStartE2EDuration="3.903606556s" podCreationTimestamp="2026-01-28 07:04:18 +0000 UTC" firstStartedPulling="2026-01-28 07:04:19.571468795 +0000 UTC m=+982.803557604" lastFinishedPulling="2026-01-28 07:04:20.885652384 +0000 UTC m=+984.117741193" observedRunningTime="2026-01-28 07:04:21.882109547 +0000 UTC m=+985.114198356" watchObservedRunningTime="2026-01-28 07:04:21.903606556 +0000 UTC m=+985.135695365" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.916840 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84d4f5dddf-lct7d" podStartSLOduration=2.660763329 podStartE2EDuration="6.916822084s" podCreationTimestamp="2026-01-28 07:04:15 +0000 UTC" firstStartedPulling="2026-01-28 07:04:16.603258153 +0000 UTC m=+979.835346962" lastFinishedPulling="2026-01-28 07:04:20.859316907 +0000 UTC m=+984.091405717" observedRunningTime="2026-01-28 07:04:21.912547808 +0000 UTC m=+985.144636617" watchObservedRunningTime="2026-01-28 07:04:21.916822084 +0000 UTC m=+985.148910894" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.941855 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.944092 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d89d45779-ghhkc" podStartSLOduration=2.643850073 podStartE2EDuration="3.944074786s" podCreationTimestamp="2026-01-28 07:04:18 +0000 UTC" firstStartedPulling="2026-01-28 07:04:19.586064408 +0000 UTC m=+982.818153217" lastFinishedPulling="2026-01-28 07:04:20.886289121 +0000 UTC m=+984.118377930" observedRunningTime="2026-01-28 07:04:21.942537024 +0000 UTC m=+985.174625834" watchObservedRunningTime="2026-01-28 07:04:21.944074786 +0000 UTC m=+985.176163595" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.963206 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65b856d58d-bqqcd" podStartSLOduration=3.9631933029999997 podStartE2EDuration="3.963193303s" podCreationTimestamp="2026-01-28 07:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:21.960937972 +0000 UTC m=+985.193026781" watchObservedRunningTime="2026-01-28 07:04:21.963193303 +0000 UTC m=+985.195282101" Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.978201 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:21 crc kubenswrapper[4642]: I0128 07:04:21.988497 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" podStartSLOduration=2.770076523 podStartE2EDuration="6.988466362s" podCreationTimestamp="2026-01-28 07:04:15 +0000 UTC" firstStartedPulling="2026-01-28 07:04:16.666137059 +0000 UTC m=+979.898225868" lastFinishedPulling="2026-01-28 07:04:20.884526898 +0000 UTC m=+984.116615707" observedRunningTime="2026-01-28 07:04:21.984488493 +0000 UTC m=+985.216577301" 
watchObservedRunningTime="2026-01-28 07:04:21.988466362 +0000 UTC m=+985.220555171" Jan 28 07:04:22 crc kubenswrapper[4642]: I0128 07:04:22.643012 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:22 crc kubenswrapper[4642]: I0128 07:04:22.967834 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 07:04:22 crc kubenswrapper[4642]: I0128 07:04:22.967888 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 07:04:22 crc kubenswrapper[4642]: I0128 07:04:22.996017 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.001960 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.823332 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.891525 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84d4f5dddf-lct7d" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker-log" containerID="cri-o://9d82231da3c8bfefb2a21ad829aae165a34370ea4b8bbdd6384e077495ed3f39" gracePeriod=30 Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.891711 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener-log" containerID="cri-o://5f6fc8c6cfdde9eaa51ee3340fc2fb4f8003c31526b7645e86f40f84c39c5d65" gracePeriod=30 Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.892629 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84d4f5dddf-lct7d" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker" containerID="cri-o://a37755ebf447dbff77d6188e2a852c3f791d86639d7b60ced552faa195ea2f08" gracePeriod=30 Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.892787 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener" containerID="cri-o://0090957fad569ddbb7c7ff55a8fe348adcd9ee144812fbe6b714c0e5a35d0b81" gracePeriod=30 Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.892865 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.892886 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.949885 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.949919 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.977268 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:23 crc kubenswrapper[4642]: I0128 07:04:23.986786 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.905330 4642 generic.go:334] "Generic (PLEG): container finished" podID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerID="9d82231da3c8bfefb2a21ad829aae165a34370ea4b8bbdd6384e077495ed3f39" exitCode=143 Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.905540 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerDied","Data":"9d82231da3c8bfefb2a21ad829aae165a34370ea4b8bbdd6384e077495ed3f39"} Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.908711 4642 generic.go:334] "Generic (PLEG): container finished" podID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerID="0090957fad569ddbb7c7ff55a8fe348adcd9ee144812fbe6b714c0e5a35d0b81" exitCode=0 Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.908734 4642 generic.go:334] "Generic (PLEG): container finished" podID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerID="5f6fc8c6cfdde9eaa51ee3340fc2fb4f8003c31526b7645e86f40f84c39c5d65" exitCode=143 Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.908780 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerDied","Data":"0090957fad569ddbb7c7ff55a8fe348adcd9ee144812fbe6b714c0e5a35d0b81"} Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.908805 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerDied","Data":"5f6fc8c6cfdde9eaa51ee3340fc2fb4f8003c31526b7645e86f40f84c39c5d65"} Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.909442 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:24 crc kubenswrapper[4642]: I0128 07:04:24.909498 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:25 crc kubenswrapper[4642]: I0128 07:04:25.499570 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 07:04:25 crc kubenswrapper[4642]: I0128 07:04:25.633555 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 07:04:25 crc kubenswrapper[4642]: I0128 07:04:25.948025 4642 generic.go:334] "Generic (PLEG): container finished" podID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerID="a37755ebf447dbff77d6188e2a852c3f791d86639d7b60ced552faa195ea2f08" exitCode=0 Jan 28 07:04:25 crc kubenswrapper[4642]: I0128 07:04:25.948681 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerDied","Data":"a37755ebf447dbff77d6188e2a852c3f791d86639d7b60ced552faa195ea2f08"} Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.292363 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.340485 4642 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.340702 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="dnsmasq-dns" containerID="cri-o://716156c7890e28ba48a0c68ce3ac2b1368c24ced520d78fdbc588d840e3ff3f5" gracePeriod=10 Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.382059 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.740302 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.924366 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.994338 4642 generic.go:334] "Generic (PLEG): container finished" podID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerID="716156c7890e28ba48a0c68ce3ac2b1368c24ced520d78fdbc588d840e3ff3f5" exitCode=0 Jan 28 07:04:26 crc kubenswrapper[4642]: I0128 07:04:26.994410 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" event={"ID":"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e","Type":"ContainerDied","Data":"716156c7890e28ba48a0c68ce3ac2b1368c24ced520d78fdbc588d840e3ff3f5"} Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.826095 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.959682 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs\") pod \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.959991 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkpv9\" (UniqueName: \"kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9\") pod \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.960169 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom\") pod \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.960326 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs" (OuterVolumeSpecName: "logs") pod "46dbf3f9-174e-4b72-9432-c0307d14c9ac" (UID: "46dbf3f9-174e-4b72-9432-c0307d14c9ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.961166 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle\") pod \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.961393 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data\") pod \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\" (UID: \"46dbf3f9-174e-4b72-9432-c0307d14c9ac\") " Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.962325 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46dbf3f9-174e-4b72-9432-c0307d14c9ac-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.969538 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9" (OuterVolumeSpecName: "kube-api-access-kkpv9") pod "46dbf3f9-174e-4b72-9432-c0307d14c9ac" (UID: "46dbf3f9-174e-4b72-9432-c0307d14c9ac"). InnerVolumeSpecName "kube-api-access-kkpv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:27 crc kubenswrapper[4642]: I0128 07:04:27.975379 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46dbf3f9-174e-4b72-9432-c0307d14c9ac" (UID: "46dbf3f9-174e-4b72-9432-c0307d14c9ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.004254 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84d4f5dddf-lct7d" event={"ID":"46dbf3f9-174e-4b72-9432-c0307d14c9ac","Type":"ContainerDied","Data":"2cf5bec6bbbdbee97091bb7afd989e8508b07aae3fa4c4259c5491ea238508d0"} Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.004303 4642 scope.go:117] "RemoveContainer" containerID="a37755ebf447dbff77d6188e2a852c3f791d86639d7b60ced552faa195ea2f08" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.004427 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-84d4f5dddf-lct7d" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.007341 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" event={"ID":"94d4e46c-7083-4acb-8925-a9a92278e0c6","Type":"ContainerDied","Data":"d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91"} Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.007359 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c4365df525740e4029774e15c6361421f51fb4c4e16d8f52a251a2d73a7f91" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012056 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-central-agent" containerID="cri-o://b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" gracePeriod=30 Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012253 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerStarted","Data":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012281 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012363 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="proxy-httpd" containerID="cri-o://2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" gracePeriod=30 Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012582 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-notification-agent" containerID="cri-o://0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" gracePeriod=30 Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.012637 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="sg-core" containerID="cri-o://3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" gracePeriod=30 Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.029300 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46dbf3f9-174e-4b72-9432-c0307d14c9ac" (UID: "46dbf3f9-174e-4b72-9432-c0307d14c9ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.036401 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data" (OuterVolumeSpecName: "config-data") pod "46dbf3f9-174e-4b72-9432-c0307d14c9ac" (UID: "46dbf3f9-174e-4b72-9432-c0307d14c9ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.045199 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.070376522 podStartE2EDuration="30.04515798s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="2026-01-28 07:03:59.616971789 +0000 UTC m=+962.849060598" lastFinishedPulling="2026-01-28 07:04:27.591753248 +0000 UTC m=+990.823842056" observedRunningTime="2026-01-28 07:04:28.042805467 +0000 UTC m=+991.274894276" watchObservedRunningTime="2026-01-28 07:04:28.04515798 +0000 UTC m=+991.277246789" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.064753 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.064801 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkpv9\" (UniqueName: \"kubernetes.io/projected/46dbf3f9-174e-4b72-9432-c0307d14c9ac-kube-api-access-kkpv9\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.064817 4642 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.064826 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46dbf3f9-174e-4b72-9432-c0307d14c9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.142696 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.149310 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.154316 4642 scope.go:117] "RemoveContainer" containerID="9d82231da3c8bfefb2a21ad829aae165a34370ea4b8bbdd6384e077495ed3f39" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.267722 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom\") pod \"94d4e46c-7083-4acb-8925-a9a92278e0c6\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.267830 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.267879 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs\") pod \"94d4e46c-7083-4acb-8925-a9a92278e0c6\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.267957 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268029 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data\") pod \"94d4e46c-7083-4acb-8925-a9a92278e0c6\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268081 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268251 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc49s\" (UniqueName: \"kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s\") pod \"94d4e46c-7083-4acb-8925-a9a92278e0c6\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268299 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv5pb\" (UniqueName: \"kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268347 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268374 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb\") pod \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\" (UID: \"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268393 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle\") pod \"94d4e46c-7083-4acb-8925-a9a92278e0c6\" (UID: \"94d4e46c-7083-4acb-8925-a9a92278e0c6\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.268677 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs" (OuterVolumeSpecName: "logs") pod "94d4e46c-7083-4acb-8925-a9a92278e0c6" (UID: "94d4e46c-7083-4acb-8925-a9a92278e0c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.269089 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d4e46c-7083-4acb-8925-a9a92278e0c6-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.273576 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s" (OuterVolumeSpecName: "kube-api-access-mc49s") pod "94d4e46c-7083-4acb-8925-a9a92278e0c6" (UID: "94d4e46c-7083-4acb-8925-a9a92278e0c6"). InnerVolumeSpecName "kube-api-access-mc49s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.273716 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94d4e46c-7083-4acb-8925-a9a92278e0c6" (UID: "94d4e46c-7083-4acb-8925-a9a92278e0c6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.273739 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb" (OuterVolumeSpecName: "kube-api-access-fv5pb") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "kube-api-access-fv5pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.302664 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d4e46c-7083-4acb-8925-a9a92278e0c6" (UID: "94d4e46c-7083-4acb-8925-a9a92278e0c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.314054 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data" (OuterVolumeSpecName: "config-data") pod "94d4e46c-7083-4acb-8925-a9a92278e0c6" (UID: "94d4e46c-7083-4acb-8925-a9a92278e0c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.316595 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.318056 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config" (OuterVolumeSpecName: "config") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.320132 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.325619 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.333615 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" (UID: "c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.343839 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.347958 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-84d4f5dddf-lct7d"] Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374284 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc49s\" (UniqueName: \"kubernetes.io/projected/94d4e46c-7083-4acb-8925-a9a92278e0c6-kube-api-access-mc49s\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374326 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv5pb\" (UniqueName: \"kubernetes.io/projected/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-kube-api-access-fv5pb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374340 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374365 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374378 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.374388 4642 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.380151 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.380199 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.380214 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d4e46c-7083-4acb-8925-a9a92278e0c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.380225 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.744657 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895202 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895294 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895331 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895362 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc45k\" (UniqueName: \"kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895402 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895489 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895585 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd\") pod \"201fcbe3-af3e-4f46-bd0f-07693de10229\" (UID: \"201fcbe3-af3e-4f46-bd0f-07693de10229\") " Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895679 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.895973 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.896337 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.901841 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts" (OuterVolumeSpecName: "scripts") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.902067 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k" (OuterVolumeSpecName: "kube-api-access-dc45k") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "kube-api-access-dc45k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.924002 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.956672 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.968405 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data" (OuterVolumeSpecName: "config-data") pod "201fcbe3-af3e-4f46-bd0f-07693de10229" (UID: "201fcbe3-af3e-4f46-bd0f-07693de10229"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997073 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997109 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fcbe3-af3e-4f46-bd0f-07693de10229-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997118 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997129 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997142 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc45k\" (UniqueName: \"kubernetes.io/projected/201fcbe3-af3e-4f46-bd0f-07693de10229-kube-api-access-dc45k\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:28 crc kubenswrapper[4642]: I0128 07:04:28.997151 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fcbe3-af3e-4f46-bd0f-07693de10229-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021843 4642 generic.go:334] "Generic (PLEG): container finished" podID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" exitCode=0 Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021870 4642 generic.go:334] "Generic (PLEG): container finished" podID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" exitCode=2 Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021879 4642 generic.go:334] "Generic (PLEG): container finished" podID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" exitCode=0 Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021889 4642 generic.go:334] "Generic (PLEG): container finished" podID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" exitCode=0 Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021898 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerDied","Data":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.022009 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerDied","Data":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.022025 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerDied","Data":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.022035 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerDied","Data":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.022044 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fcbe3-af3e-4f46-bd0f-07693de10229","Type":"ContainerDied","Data":"d144cb525a349523ee3bb14504c31b8e1a0c6d9d2d8ad6eb9be210030d682afd"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.022059 4642 scope.go:117] "RemoveContainer" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.021876 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.026699 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-659d7fff7d-x4jhp" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.026719 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" event={"ID":"c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e","Type":"ContainerDied","Data":"fb3d5c998962cb58d11a913c432d3d7aab955189488ad05c2e8cbf2e18cea5e3"} Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.026746 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849d6cccf9-nf7n7" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.058338 4642 scope.go:117] "RemoveContainer" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.069232 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.083025 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849d6cccf9-nf7n7"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.093314 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.103716 4642 scope.go:117] "RemoveContainer" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.111079 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" path="/var/lib/kubelet/pods/46dbf3f9-174e-4b72-9432-c0307d14c9ac/volumes" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.111803 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" path="/var/lib/kubelet/pods/c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e/volumes" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.112369 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.112447 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.113561 4642 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.113905 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.113975 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114030 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114074 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114120 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="sg-core" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114164 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="sg-core" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114243 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-notification-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114299 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-notification-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114359 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="proxy-httpd" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114402 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="proxy-httpd" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114448 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114501 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114556 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114605 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114660 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener-log" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114703 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener-log" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114770 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-central-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114816 
4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-central-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.114904 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker-log" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.114960 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker-log" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.115015 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="init" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115057 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="init" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.115304 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="init" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115362 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="init" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115602 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener-log" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115664 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" containerName="barbican-keystone-listener" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115721 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="901e4414-3465-41d0-a4d6-cc041e6e4319" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115772 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-notification-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115816 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker-log" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115862 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dbf3f9-174e-4b72-9432-c0307d14c9ac" containerName="barbican-worker" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115906 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40a1d5e-2d6f-4f93-9a5f-6fbc808f1a2e" containerName="dnsmasq-dns" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.115958 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="ceilometer-central-agent" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.116021 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="proxy-httpd" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.116069 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" containerName="sg-core" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.117632 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.118131 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-659d7fff7d-x4jhp"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.120109 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.120399 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.123014 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.143051 4642 scope.go:117] "RemoveContainer" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.161531 4642 scope.go:117] "RemoveContainer" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.162111 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": container with ID starting with 2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084 not found: ID does not exist" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.162168 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} err="failed to get container status \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": rpc error: code = NotFound desc = could not find container \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": container with ID starting with 2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.162220 4642 scope.go:117] "RemoveContainer" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.162591 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": container with ID starting with 3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e not found: ID does not exist" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.162621 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} err="failed to get container status \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": rpc error: code = NotFound desc = could not find container \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": container with ID starting with 3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.162652 4642 scope.go:117] "RemoveContainer" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 
07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.163034 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": container with ID starting with 0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0 not found: ID does not exist" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.163104 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} err="failed to get container status \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": rpc error: code = NotFound desc = could not find container \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": container with ID starting with 0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.163120 4642 scope.go:117] "RemoveContainer" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: E0128 07:04:29.164137 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": container with ID starting with b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1 not found: ID does not exist" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.164179 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} err="failed to get container status \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": rpc error: code = NotFound desc = could not find container \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": container with ID starting with b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.164215 4642 scope.go:117] "RemoveContainer" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.164709 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} err="failed to get container status \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": rpc error: code = NotFound desc = could not find container \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": container with ID starting with 2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.164733 4642 scope.go:117] "RemoveContainer" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.165129 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} err="failed to get container status 
\"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": rpc error: code = NotFound desc = could not find container \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": container with ID starting with 3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.165153 4642 scope.go:117] "RemoveContainer" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.165700 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} err="failed to get container status \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": rpc error: code = NotFound desc = could not find container \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": container with ID starting with 0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.165723 4642 scope.go:117] "RemoveContainer" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.166307 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} err="failed to get container status \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": rpc error: code = NotFound desc = could not find container \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": container with ID starting with b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.166346 4642 scope.go:117] "RemoveContainer" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.166804 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} err="failed to get container status \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": rpc error: code = NotFound desc = could not find container \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": container with ID starting with 2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.166847 4642 scope.go:117] "RemoveContainer" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167180 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} err="failed to get container status \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": rpc error: code = NotFound desc = could not find container \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": container with ID starting with 3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167217 4642 scope.go:117] "RemoveContainer" 
containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167433 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} err="failed to get container status \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": rpc error: code = NotFound desc = could not find container \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": container with ID starting with 0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167451 4642 scope.go:117] "RemoveContainer" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167792 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} err="failed to get container status \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": rpc error: code = NotFound desc = could not find container \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": container with ID starting with b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.167809 4642 scope.go:117] "RemoveContainer" containerID="2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168038 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084"} err="failed to get container status \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": rpc error: code = NotFound desc = could not find container \"2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084\": container with ID starting with 2e1fa65b94cb91ff1b39cf0473ca3738496cdae1c0ade8009ef01be72a6dc084 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168055 4642 scope.go:117] "RemoveContainer" containerID="3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168283 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e"} err="failed to get container status \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": rpc error: code = NotFound desc = could not find container \"3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e\": container with ID starting with 3fe858a821da4f29bdf5249bc6d7312d774e26222eb8043313250bfe9cbeac9e not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168300 4642 scope.go:117] "RemoveContainer" containerID="0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168527 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0"} err="failed to get container status \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": rpc error: code = NotFound desc = could not find 
container \"0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0\": container with ID starting with 0883a9883907d4acd5bfde2b91f1b7acb7d0911b19f63131597360895a3f70f0 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168544 4642 scope.go:117] "RemoveContainer" containerID="b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168747 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1"} err="failed to get container status \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": rpc error: code = NotFound desc = could not find container \"b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1\": container with ID starting with b536b7183d5a8423c1a0ed66eec87ed8e00629d2b1da21c17881a879e0873af1 not found: ID does not exist" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.168767 4642 scope.go:117] "RemoveContainer" containerID="716156c7890e28ba48a0c68ce3ac2b1368c24ced520d78fdbc588d840e3ff3f5" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.184400 4642 scope.go:117] "RemoveContainer" containerID="d0678ef6e24f0be02ecc6fbfa4d0e423ab40d0e3177d47fdc5d416a94b3962dd" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200694 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200737 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjn5x\" (UniqueName: \"kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200769 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200800 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200832 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200852 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " 
pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.200878 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302677 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302724 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302815 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302842 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjn5x\" (UniqueName: \"kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302871 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302895 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.302926 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.303316 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.304174 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " 
pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.308175 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.310503 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.312102 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.312916 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.319992 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjn5x\" (UniqueName: \"kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x\") pod \"ceilometer-0\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.439984 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:04:29 crc kubenswrapper[4642]: I0128 07:04:29.837342 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:29 crc kubenswrapper[4642]: W0128 07:04:29.845002 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod789c47c0_e193_4caf_b7a0_fad74e069087.slice/crio-0c990e3c5095b80e3a5e5d71805285ff36bb074ba6da38b0c84cf3f91d7d7128 WatchSource:0}: Error finding container 0c990e3c5095b80e3a5e5d71805285ff36bb074ba6da38b0c84cf3f91d7d7128: Status 404 returned error can't find the container with id 0c990e3c5095b80e3a5e5d71805285ff36bb074ba6da38b0c84cf3f91d7d7128 Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.039771 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerStarted","Data":"0c990e3c5095b80e3a5e5d71805285ff36bb074ba6da38b0c84cf3f91d7d7128"} Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.305704 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.368051 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65b856d58d-bqqcd" Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.406472 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.406672 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6586c6444d-858tc" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api-log" containerID="cri-o://1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86" gracePeriod=30 Jan 28 07:04:30 crc kubenswrapper[4642]: I0128 07:04:30.406835 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6586c6444d-858tc" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api" containerID="cri-o://d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29" gracePeriod=30 Jan 28 07:04:31 crc kubenswrapper[4642]: I0128 07:04:31.055290 4642 generic.go:334] "Generic (PLEG): container finished" podID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerID="1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86" exitCode=143 Jan 28 07:04:31 crc kubenswrapper[4642]: I0128 07:04:31.055358 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerDied","Data":"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86"} Jan 28 07:04:31 crc kubenswrapper[4642]: I0128 07:04:31.057591 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerStarted","Data":"6b9666071ae5d9423af33d8a12c2f7074dd10dae94b62ab19c5df9908e7cfb7e"} Jan 28 07:04:31 crc kubenswrapper[4642]: I0128 07:04:31.128552 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201fcbe3-af3e-4f46-bd0f-07693de10229" path="/var/lib/kubelet/pods/201fcbe3-af3e-4f46-bd0f-07693de10229/volumes" Jan 28 07:04:31 crc kubenswrapper[4642]: I0128 07:04:31.129260 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="94d4e46c-7083-4acb-8925-a9a92278e0c6" path="/var/lib/kubelet/pods/94d4e46c-7083-4acb-8925-a9a92278e0c6/volumes" Jan 28 07:04:32 crc kubenswrapper[4642]: I0128 07:04:32.074465 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerStarted","Data":"bf343bb2d680fce762576304e86b1515dc5620bcb7fa86395b1ca33aa3de44b2"} Jan 28 07:04:33 crc kubenswrapper[4642]: I0128 07:04:33.094490 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerStarted","Data":"673c4e608e44a612854ceed78b443f2c65d64f48aea558cc8c3631b85ecd74aa"} Jan 28 07:04:33 crc kubenswrapper[4642]: I0128 07:04:33.493080 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6586c6444d-858tc" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:60934->10.217.0.155:9311: read: connection reset by peer" Jan 28 07:04:33 crc kubenswrapper[4642]: I0128 07:04:33.493142 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6586c6444d-858tc" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:60946->10.217.0.155:9311: read: connection reset by peer" Jan 28 07:04:33 crc kubenswrapper[4642]: I0128 07:04:33.888058 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.018579 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom\") pod \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.018669 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data\") pod \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.018694 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle\") pod \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.018758 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs\") pod \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.018778 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fld52\" (UniqueName: \"kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52\") pod \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\" (UID: \"747300e5-abfb-4f04-90b4-a88ab60f1a5f\") " Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.019506 4642 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs" (OuterVolumeSpecName: "logs") pod "747300e5-abfb-4f04-90b4-a88ab60f1a5f" (UID: "747300e5-abfb-4f04-90b4-a88ab60f1a5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.025757 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "747300e5-abfb-4f04-90b4-a88ab60f1a5f" (UID: "747300e5-abfb-4f04-90b4-a88ab60f1a5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.026821 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52" (OuterVolumeSpecName: "kube-api-access-fld52") pod "747300e5-abfb-4f04-90b4-a88ab60f1a5f" (UID: "747300e5-abfb-4f04-90b4-a88ab60f1a5f"). InnerVolumeSpecName "kube-api-access-fld52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.046205 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "747300e5-abfb-4f04-90b4-a88ab60f1a5f" (UID: "747300e5-abfb-4f04-90b4-a88ab60f1a5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.067488 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data" (OuterVolumeSpecName: "config-data") pod "747300e5-abfb-4f04-90b4-a88ab60f1a5f" (UID: "747300e5-abfb-4f04-90b4-a88ab60f1a5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.109306 4642 generic.go:334] "Generic (PLEG): container finished" podID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerID="d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29" exitCode=0 Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.109914 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerDied","Data":"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29"} Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.109981 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6586c6444d-858tc" event={"ID":"747300e5-abfb-4f04-90b4-a88ab60f1a5f","Type":"ContainerDied","Data":"2a85d0fb95e2ea374114b5ea9ffed07d76745ecc7f2bca2880f43258ebbe0014"} Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.110042 4642 scope.go:117] "RemoveContainer" containerID="d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.110160 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6586c6444d-858tc" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.121916 4642 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.121940 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.121950 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747300e5-abfb-4f04-90b4-a88ab60f1a5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.121962 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747300e5-abfb-4f04-90b4-a88ab60f1a5f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.121973 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fld52\" (UniqueName: \"kubernetes.io/projected/747300e5-abfb-4f04-90b4-a88ab60f1a5f-kube-api-access-fld52\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.144613 4642 scope.go:117] "RemoveContainer" containerID="1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.146875 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.154391 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6586c6444d-858tc"] Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.164762 4642 scope.go:117] "RemoveContainer" containerID="d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29" Jan 28 07:04:34 crc kubenswrapper[4642]: E0128 07:04:34.165264 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29\": container with ID starting with d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29 not found: ID does not exist" containerID="d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.165308 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29"} err="failed to get container status \"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29\": rpc error: code = NotFound desc = could not find container \"d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29\": container with ID starting with d281cc6bf90f440e9a88696df456a95d575fc1145b46a94c952b6c5f5a269b29 not found: ID does not exist" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.165333 4642 scope.go:117] "RemoveContainer" containerID="1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86" Jan 28 07:04:34 crc kubenswrapper[4642]: E0128 07:04:34.165769 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86\": container with ID starting with 1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86 not found: ID does not exist" containerID="1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86" Jan 28 07:04:34 crc kubenswrapper[4642]: I0128 07:04:34.165813 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86"} err="failed to get container status \"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86\": rpc error: code = NotFound desc = could not find container \"1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86\": container with ID starting with 1788ced8982e83d0f3acfe7102596a7f8107a713813fe5e662ed157060cbaf86 not found: ID does not exist" Jan 28 07:04:35 crc kubenswrapper[4642]: I0128 07:04:35.109739 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" path="/var/lib/kubelet/pods/747300e5-abfb-4f04-90b4-a88ab60f1a5f/volumes" Jan 28 07:04:35 crc kubenswrapper[4642]: I0128 07:04:35.124179 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerStarted","Data":"cd3959ecf7e23fe5039b83a72db3b34e9b79411ed22d82d2e983ecdb81fde7fb"} Jan 28 07:04:35 crc kubenswrapper[4642]: I0128 07:04:35.125615 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:04:35 crc kubenswrapper[4642]: I0128 07:04:35.153046 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.825488526 podStartE2EDuration="6.153014164s" podCreationTimestamp="2026-01-28 07:04:29 +0000 UTC" firstStartedPulling="2026-01-28 07:04:29.847547877 +0000 UTC m=+993.079636687" lastFinishedPulling="2026-01-28 07:04:34.175073516 +0000 UTC m=+997.407162325" observedRunningTime="2026-01-28 07:04:35.147411279 +0000 UTC m=+998.379500088" watchObservedRunningTime="2026-01-28 07:04:35.153014164 +0000 UTC m=+998.385102973" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.364282 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.608849 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.609500 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c985bd689-xq2rx" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-api" containerID="cri-o://a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2" gracePeriod=30 Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.609730 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c985bd689-xq2rx" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" containerID="cri-o://5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2" gracePeriod=30 Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.616836 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6c985bd689-xq2rx" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9696/\": EOF" Jan 28 
07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.642429 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7476bb99fc-vvh9d"] Jan 28 07:04:41 crc kubenswrapper[4642]: E0128 07:04:41.642838 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api-log" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.642858 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api-log" Jan 28 07:04:41 crc kubenswrapper[4642]: E0128 07:04:41.642895 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.642902 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.643058 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.643080 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="747300e5-abfb-4f04-90b4-a88ab60f1a5f" containerName="barbican-api-log" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.644059 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.662118 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7476bb99fc-vvh9d"] Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775224 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-combined-ca-bundle\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775316 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-ovndb-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775375 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-public-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775453 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-httpd-config\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775521 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tmx\" (UniqueName: 
\"kubernetes.io/projected/49e0885f-27b0-4197-9ff0-95732b63bf51-kube-api-access-l5tmx\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775604 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-internal-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.775640 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-config\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.878582 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-combined-ca-bundle\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.878716 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-ovndb-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.878804 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-public-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.878930 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-httpd-config\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.879020 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tmx\" (UniqueName: \"kubernetes.io/projected/49e0885f-27b0-4197-9ff0-95732b63bf51-kube-api-access-l5tmx\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.879152 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-internal-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.879225 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-config\") pod 
\"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.885958 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-ovndb-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.887363 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-combined-ca-bundle\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.890747 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-public-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.891174 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-config\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.893696 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-httpd-config\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.895855 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0885f-27b0-4197-9ff0-95732b63bf51-internal-tls-certs\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.896495 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tmx\" (UniqueName: \"kubernetes.io/projected/49e0885f-27b0-4197-9ff0-95732b63bf51-kube-api-access-l5tmx\") pod \"neutron-7476bb99fc-vvh9d\" (UID: \"49e0885f-27b0-4197-9ff0-95732b63bf51\") " pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:41 crc kubenswrapper[4642]: I0128 07:04:41.962761 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:42 crc kubenswrapper[4642]: I0128 07:04:42.193860 4642 generic.go:334] "Generic (PLEG): container finished" podID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerID="5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2" exitCode=0 Jan 28 07:04:42 crc kubenswrapper[4642]: I0128 07:04:42.193938 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerDied","Data":"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2"} Jan 28 07:04:42 crc kubenswrapper[4642]: W0128 07:04:42.573254 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e0885f_27b0_4197_9ff0_95732b63bf51.slice/crio-d042480bb79efbc43eeb5b1fc58adde4bc1f60a3da5425e01c2c593b1bea29b2 WatchSource:0}: Error finding container d042480bb79efbc43eeb5b1fc58adde4bc1f60a3da5425e01c2c593b1bea29b2: Status 404 returned error can't find the container with id d042480bb79efbc43eeb5b1fc58adde4bc1f60a3da5425e01c2c593b1bea29b2 Jan 28 07:04:42 crc kubenswrapper[4642]: I0128 07:04:42.573560 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7476bb99fc-vvh9d"] Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.202948 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7476bb99fc-vvh9d" event={"ID":"49e0885f-27b0-4197-9ff0-95732b63bf51","Type":"ContainerStarted","Data":"0e34c0e0d15801d252505edc7e386a5dc9180aa276d13860c934493533523e29"} Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.203318 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7476bb99fc-vvh9d" event={"ID":"49e0885f-27b0-4197-9ff0-95732b63bf51","Type":"ContainerStarted","Data":"f368db3a083a180c20bcec39ebee02184c9ef9351914cdf9114ebc5ab3337f6f"} Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.203331 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7476bb99fc-vvh9d" event={"ID":"49e0885f-27b0-4197-9ff0-95732b63bf51","Type":"ContainerStarted","Data":"d042480bb79efbc43eeb5b1fc58adde4bc1f60a3da5425e01c2c593b1bea29b2"} Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.203347 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.221804 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7476bb99fc-vvh9d" podStartSLOduration=2.221788368 podStartE2EDuration="2.221788368s" podCreationTimestamp="2026-01-28 07:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:04:43.219469959 +0000 UTC m=+1006.451558769" watchObservedRunningTime="2026-01-28 07:04:43.221788368 +0000 UTC m=+1006.453877178" Jan 28 07:04:43 crc kubenswrapper[4642]: I0128 07:04:43.632740 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6c985bd689-xq2rx" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9696/\": dial tcp 10.217.0.149:9696: connect: connection refused" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.133589 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.212812 4642 generic.go:334] "Generic (PLEG): container finished" podID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerID="a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2" exitCode=0 Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.212934 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerDied","Data":"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2"} Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.213003 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c985bd689-xq2rx" event={"ID":"b06aaf7d-5be1-49f9-aa27-9c5fe5408068","Type":"ContainerDied","Data":"91668e33200f7db7d9e54d0632173836f6399cabf45a1530365d5b7e870e07e6"} Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.213037 4642 scope.go:117] "RemoveContainer" containerID="5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.213227 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c985bd689-xq2rx" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.237460 4642 scope.go:117] "RemoveContainer" containerID="a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.238904 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239038 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239164 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239273 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239378 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239462 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: 
\"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.239553 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd5bj\" (UniqueName: \"kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj\") pod \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\" (UID: \"b06aaf7d-5be1-49f9-aa27-9c5fe5408068\") " Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.244704 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj" (OuterVolumeSpecName: "kube-api-access-cd5bj") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "kube-api-access-cd5bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.253864 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.278349 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.279713 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.288770 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config" (OuterVolumeSpecName: "config") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.292447 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.306556 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b06aaf7d-5be1-49f9-aa27-9c5fe5408068" (UID: "b06aaf7d-5be1-49f9-aa27-9c5fe5408068"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.342994 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343028 4642 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343042 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343052 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343063 4642 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343071 4642 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.343081 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd5bj\" (UniqueName: \"kubernetes.io/projected/b06aaf7d-5be1-49f9-aa27-9c5fe5408068-kube-api-access-cd5bj\") on node \"crc\" DevicePath \"\"" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.352692 4642 scope.go:117] "RemoveContainer" containerID="5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2" Jan 28 07:04:44 crc kubenswrapper[4642]: E0128 07:04:44.353284 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2\": container with ID starting with 5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2 not found: ID does not exist" containerID="5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.353356 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2"} err="failed to get container status \"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2\": rpc error: code = NotFound desc = could not find container \"5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2\": container with ID starting with 5742a6deedc3126d7e407aab7fa02157f0d16d462d5dd51264f40a60c8e39bf2 not found: ID does not exist" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.353405 4642 scope.go:117] "RemoveContainer" containerID="a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2" Jan 28 07:04:44 crc kubenswrapper[4642]: E0128 07:04:44.353811 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2\": container with ID starting with a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2 not found: ID does not exist" containerID="a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.353852 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2"} err="failed to get container status \"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2\": rpc error: code = NotFound desc = could not find container \"a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2\": container with ID starting with a0f5cbbb93779a6df185d76d6369a8fe22c26ea9e5be6ff979071014999a42d2 not found: ID does not exist" Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.545450 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:44 crc kubenswrapper[4642]: I0128 07:04:44.553308 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c985bd689-xq2rx"] Jan 28 07:04:45 crc kubenswrapper[4642]: I0128 07:04:45.110406 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" path="/var/lib/kubelet/pods/b06aaf7d-5be1-49f9-aa27-9c5fe5408068/volumes" Jan 28 07:04:45 crc kubenswrapper[4642]: I0128 07:04:45.766994 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:45 crc kubenswrapper[4642]: I0128 07:04:45.774002 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59fd947774-hzkdn" Jan 28 07:04:48 crc kubenswrapper[4642]: I0128 07:04:48.408961 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f77bb558-ws68h" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.861487 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 07:04:50 crc kubenswrapper[4642]: E0128 07:04:50.862054 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.862069 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" Jan 28 07:04:50 crc kubenswrapper[4642]: E0128 07:04:50.862083 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-api" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.862088 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-api" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.862246 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-api" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.862261 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06aaf7d-5be1-49f9-aa27-9c5fe5408068" containerName="neutron-httpd" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.862786 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.865283 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.866064 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.866178 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9rnb8" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.872281 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.894121 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config-secret\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.894202 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.894412 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j48v\" (UniqueName: \"kubernetes.io/projected/c754de0a-7ee3-416f-988d-d0eb4829ea99-kube-api-access-7j48v\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.894473 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.996234 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config-secret\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.996351 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.996407 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j48v\" (UniqueName: \"kubernetes.io/projected/c754de0a-7ee3-416f-988d-d0eb4829ea99-kube-api-access-7j48v\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.996459 4642 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:50 crc kubenswrapper[4642]: I0128 07:04:50.997637 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:51 crc kubenswrapper[4642]: I0128 07:04:51.002035 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-openstack-config-secret\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:51 crc kubenswrapper[4642]: I0128 07:04:51.002490 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c754de0a-7ee3-416f-988d-d0eb4829ea99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:51 crc kubenswrapper[4642]: I0128 07:04:51.012537 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j48v\" (UniqueName: \"kubernetes.io/projected/c754de0a-7ee3-416f-988d-d0eb4829ea99-kube-api-access-7j48v\") pod \"openstackclient\" (UID: \"c754de0a-7ee3-416f-988d-d0eb4829ea99\") " pod="openstack/openstackclient" Jan 28 07:04:51 crc kubenswrapper[4642]: I0128 07:04:51.178543 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.082956 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f4ff4554c-l99rf"] Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.085123 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.086828 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.087135 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.087228 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.109098 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4ff4554c-l99rf"] Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194344 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-combined-ca-bundle\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194389 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-internal-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194428 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-config-data\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194523 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-log-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194606 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-public-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194631 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-run-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194730 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-etc-swift\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " 
pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.194751 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrq5\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-kube-api-access-jfrq5\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.295945 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-config-data\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296009 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-log-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296047 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-public-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296065 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-run-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296118 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-etc-swift\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296135 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrq5\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-kube-api-access-jfrq5\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296171 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-combined-ca-bundle\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296204 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-internal-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " 
pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.296614 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-log-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.297168 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31037f93-2b83-4bd0-bcdf-62c0a973432a-run-httpd\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.301980 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-combined-ca-bundle\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.302547 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-config-data\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.302771 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-public-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.302880 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-etc-swift\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.306838 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31037f93-2b83-4bd0-bcdf-62c0a973432a-internal-tls-certs\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.310285 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrq5\" (UniqueName: \"kubernetes.io/projected/31037f93-2b83-4bd0-bcdf-62c0a973432a-kube-api-access-jfrq5\") pod \"swift-proxy-6f4ff4554c-l99rf\" (UID: \"31037f93-2b83-4bd0-bcdf-62c0a973432a\") " pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:55 crc kubenswrapper[4642]: I0128 07:04:55.400778 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.252900 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.253227 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-notification-agent" containerID="cri-o://bf343bb2d680fce762576304e86b1515dc5620bcb7fa86395b1ca33aa3de44b2" gracePeriod=30 Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.253225 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="sg-core" containerID="cri-o://673c4e608e44a612854ceed78b443f2c65d64f48aea558cc8c3631b85ecd74aa" gracePeriod=30 Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.253272 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" containerID="cri-o://cd3959ecf7e23fe5039b83a72db3b34e9b79411ed22d82d2e983ecdb81fde7fb" gracePeriod=30 Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.253150 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-central-agent" containerID="cri-o://6b9666071ae5d9423af33d8a12c2f7074dd10dae94b62ab19c5df9908e7cfb7e" gracePeriod=30 Jan 28 07:04:56 crc kubenswrapper[4642]: I0128 07:04:56.296360 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": read tcp 10.217.0.2:55290->10.217.0.160:3000: read: connection reset by peer" Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348528 4642 generic.go:334] "Generic (PLEG): container finished" podID="789c47c0-e193-4caf-b7a0-fad74e069087" containerID="cd3959ecf7e23fe5039b83a72db3b34e9b79411ed22d82d2e983ecdb81fde7fb" exitCode=0 Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348835 4642 generic.go:334] "Generic (PLEG): container finished" podID="789c47c0-e193-4caf-b7a0-fad74e069087" containerID="673c4e608e44a612854ceed78b443f2c65d64f48aea558cc8c3631b85ecd74aa" exitCode=2 Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348845 4642 generic.go:334] "Generic (PLEG): container finished" podID="789c47c0-e193-4caf-b7a0-fad74e069087" containerID="6b9666071ae5d9423af33d8a12c2f7074dd10dae94b62ab19c5df9908e7cfb7e" exitCode=0 Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348866 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerDied","Data":"cd3959ecf7e23fe5039b83a72db3b34e9b79411ed22d82d2e983ecdb81fde7fb"} Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348908 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerDied","Data":"673c4e608e44a612854ceed78b443f2c65d64f48aea558cc8c3631b85ecd74aa"} Jan 28 07:04:57 crc kubenswrapper[4642]: I0128 07:04:57.348920 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerDied","Data":"6b9666071ae5d9423af33d8a12c2f7074dd10dae94b62ab19c5df9908e7cfb7e"} Jan 28 07:04:59 crc kubenswrapper[4642]: I0128 07:04:59.441012 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": dial tcp 10.217.0.160:3000: connect: connection refused" Jan 28 07:04:59 crc kubenswrapper[4642]: I0128 07:04:59.535453 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4ff4554c-l99rf"] Jan 28 07:04:59 crc kubenswrapper[4642]: I0128 07:04:59.560604 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 07:05:00 crc kubenswrapper[4642]: I0128 07:05:00.382988 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c754de0a-7ee3-416f-988d-d0eb4829ea99","Type":"ContainerStarted","Data":"d19e166609a89f717940195283b3dce25a1685eabbc032b59006d0779979b5f5"} Jan 28 07:05:00 crc kubenswrapper[4642]: I0128 07:05:00.385608 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4ff4554c-l99rf" event={"ID":"31037f93-2b83-4bd0-bcdf-62c0a973432a","Type":"ContainerStarted","Data":"bb62331907fc6a5ce17b3f1491cccaf17b33bf21528161d51de15ac8f5946830"} Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.431692 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4ff4554c-l99rf" event={"ID":"31037f93-2b83-4bd0-bcdf-62c0a973432a","Type":"ContainerStarted","Data":"8c5d73ad17d60ffe0f46341a0316b72c9ed6521025fb70c514ba87e32f6f41f6"} Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.431996 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4ff4554c-l99rf" event={"ID":"31037f93-2b83-4bd0-bcdf-62c0a973432a","Type":"ContainerStarted","Data":"76d388d385b60484ae99f314d5bd36c62c1f85063ed7cc78755ee2145b93bb3b"} Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.433059 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.433090 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.444953 4642 generic.go:334] "Generic (PLEG): container finished" podID="789c47c0-e193-4caf-b7a0-fad74e069087" containerID="bf343bb2d680fce762576304e86b1515dc5620bcb7fa86395b1ca33aa3de44b2" exitCode=0 Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.444996 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerDied","Data":"bf343bb2d680fce762576304e86b1515dc5620bcb7fa86395b1ca33aa3de44b2"} Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.458535 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-68q8l" event={"ID":"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc","Type":"ContainerStarted","Data":"1f390e5ba9dc59d1d4b1ef1707577455571a9a0b34a3ce48db1b3641fc7e6743"} Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.467955 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f4ff4554c-l99rf" podStartSLOduration=6.467944172 podStartE2EDuration="6.467944172s" podCreationTimestamp="2026-01-28 
07:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:01.45931603 +0000 UTC m=+1024.691404839" watchObservedRunningTime="2026-01-28 07:05:01.467944172 +0000 UTC m=+1024.700032981" Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.486343 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-68q8l" podStartSLOduration=4.075221689 podStartE2EDuration="1m3.486327948s" podCreationTimestamp="2026-01-28 07:03:58 +0000 UTC" firstStartedPulling="2026-01-28 07:03:59.760931104 +0000 UTC m=+962.993019913" lastFinishedPulling="2026-01-28 07:04:59.172037363 +0000 UTC m=+1022.404126172" observedRunningTime="2026-01-28 07:05:01.483471146 +0000 UTC m=+1024.715559956" watchObservedRunningTime="2026-01-28 07:05:01.486327948 +0000 UTC m=+1024.718416758" Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.586546 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.647921 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.647975 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.648050 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjn5x\" (UniqueName: \"kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.648106 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.648138 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.648231 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.648300 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts\") pod \"789c47c0-e193-4caf-b7a0-fad74e069087\" (UID: \"789c47c0-e193-4caf-b7a0-fad74e069087\") " Jan 28 07:05:01 crc 
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.649164 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.649574 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.654292 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x" (OuterVolumeSpecName: "kube-api-access-cjn5x") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "kube-api-access-cjn5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.654434 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts" (OuterVolumeSpecName: "scripts") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.682953 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.713382 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.739490 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data" (OuterVolumeSpecName: "config-data") pod "789c47c0-e193-4caf-b7a0-fad74e069087" (UID: "789c47c0-e193-4caf-b7a0-fad74e069087"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751087 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751117 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751129 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751139 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751148 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjn5x\" (UniqueName: \"kubernetes.io/projected/789c47c0-e193-4caf-b7a0-fad74e069087-kube-api-access-cjn5x\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751156 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/789c47c0-e193-4caf-b7a0-fad74e069087-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:01 crc kubenswrapper[4642]: I0128 07:05:01.751164 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/789c47c0-e193-4caf-b7a0-fad74e069087-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.468564 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"789c47c0-e193-4caf-b7a0-fad74e069087","Type":"ContainerDied","Data":"0c990e3c5095b80e3a5e5d71805285ff36bb074ba6da38b0c84cf3f91d7d7128"}
Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.468834 4642 scope.go:117] "RemoveContainer" containerID="cd3959ecf7e23fe5039b83a72db3b34e9b79411ed22d82d2e983ecdb81fde7fb"
Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.468630 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.495522 4642 scope.go:117] "RemoveContainer" containerID="673c4e608e44a612854ceed78b443f2c65d64f48aea558cc8c3631b85ecd74aa" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.497899 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.513248 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523270 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:02 crc kubenswrapper[4642]: E0128 07:05:02.523677 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="sg-core" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523701 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="sg-core" Jan 28 07:05:02 crc kubenswrapper[4642]: E0128 07:05:02.523720 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-notification-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523748 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-notification-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: E0128 07:05:02.523758 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523764 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" Jan 28 07:05:02 crc kubenswrapper[4642]: E0128 07:05:02.523775 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-central-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523780 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-central-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523955 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-notification-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.523995 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="proxy-httpd" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.524006 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="sg-core" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.524014 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" containerName="ceilometer-central-agent" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.525285 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.525358 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.526391 4642 scope.go:117] "RemoveContainer" containerID="bf343bb2d680fce762576304e86b1515dc5620bcb7fa86395b1ca33aa3de44b2" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.527705 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.527796 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.563769 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.564051 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.564151 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.564360 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.564971 4642 scope.go:117] "RemoveContainer" containerID="6b9666071ae5d9423af33d8a12c2f7074dd10dae94b62ab19c5df9908e7cfb7e" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.564982 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.565133 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq59\" (UniqueName: \"kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.565259 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666740 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666774 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666806 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq59\" (UniqueName: \"kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666840 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666873 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.666941 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.667763 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.668036 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.668580 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.672810 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.674759 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.675286 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.676961 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.682276 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq59\" (UniqueName: \"kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59\") pod \"ceilometer-0\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " pod="openstack/ceilometer-0" Jan 28 07:05:02 crc kubenswrapper[4642]: I0128 07:05:02.841599 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.106594 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789c47c0-e193-4caf-b7a0-fad74e069087" path="/var/lib/kubelet/pods/789c47c0-e193-4caf-b7a0-fad74e069087/volumes" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.249604 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:03 crc kubenswrapper[4642]: W0128 07:05:03.262743 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578099b8_075f_47d7_b5bf_4db9ef550508.slice/crio-8832e83e7ccb8c1f2718e462fd58be46ffda2b90a9de3c5f3304dad30db2ef53 WatchSource:0}: Error finding container 8832e83e7ccb8c1f2718e462fd58be46ffda2b90a9de3c5f3304dad30db2ef53: Status 404 returned error can't find the container with id 8832e83e7ccb8c1f2718e462fd58be46ffda2b90a9de3c5f3304dad30db2ef53 Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.476363 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerStarted","Data":"8832e83e7ccb8c1f2718e462fd58be46ffda2b90a9de3c5f3304dad30db2ef53"} Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.551029 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-655lk"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.552254 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.593569 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9h5g\" (UniqueName: \"kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.593641 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.599927 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-655lk"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.664213 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d7p52"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.665573 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.675952 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-40e8-account-create-update-mlk24"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.677496 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.682131 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.687731 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7p52"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.693118 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-40e8-account-create-update-mlk24"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.694788 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77kxd\" (UniqueName: \"kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.694829 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.694859 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzcks\" (UniqueName: \"kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc 
kubenswrapper[4642]: I0128 07:05:03.694890 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9h5g\" (UniqueName: \"kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.694927 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.694974 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.696353 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.743506 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9h5g\" (UniqueName: \"kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g\") pod \"nova-api-db-create-655lk\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") " pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.763974 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.778022 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bk4gf"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.780331 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.793725 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bk4gf"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.799807 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzcks\" (UniqueName: \"kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.800047 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.801085 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rb4\" (UniqueName: \"kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.801119 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.801245 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77kxd\" (UniqueName: \"kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.801284 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.802096 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.805005 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.815645 4642 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77kxd\" (UniqueName: \"kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd\") pod \"nova-api-40e8-account-create-update-mlk24\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.815765 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzcks\" (UniqueName: \"kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks\") pod \"nova-cell0-db-create-d7p52\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.862843 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2f19-account-create-update-x86m7"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.864008 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.865713 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.868946 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2f19-account-create-update-x86m7"] Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.894245 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.902759 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r296r\" (UniqueName: \"kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.902797 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.902860 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rb4\" (UniqueName: \"kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.902935 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.903422 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.916987 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rb4\" (UniqueName: \"kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4\") pod \"nova-cell1-db-create-bk4gf\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.984902 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:03 crc kubenswrapper[4642]: I0128 07:05:03.997059 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.004600 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.004661 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r296r\" (UniqueName: \"kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.005611 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.021411 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r296r\" (UniqueName: \"kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r\") pod \"nova-cell0-2f19-account-create-update-x86m7\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.068664 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9752-account-create-update-vw6jg"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.069843 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.075397 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.079656 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9752-account-create-update-vw6jg"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.104546 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.105690 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vcb\" (UniqueName: \"kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.105724 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.192765 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.210029 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vcb\" (UniqueName: \"kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.210077 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.210752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.229098 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vcb\" (UniqueName: \"kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb\") pod \"nova-cell1-9752-account-create-update-vw6jg\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.338312 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-655lk"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.396681 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.442257 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-40e8-account-create-update-mlk24"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.478459 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7p52"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.492141 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-655lk" event={"ID":"be3cadcb-de4f-48dd-aae9-c2ff2ca38571","Type":"ContainerStarted","Data":"20a977ef5fc627d5f51887fd40ed8fabf1d034df8ce72b77b84945f5e91bfce8"} Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.494323 4642 generic.go:334] "Generic (PLEG): container finished" podID="e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" containerID="1f390e5ba9dc59d1d4b1ef1707577455571a9a0b34a3ce48db1b3641fc7e6743" exitCode=0 Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.494367 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-68q8l" event={"ID":"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc","Type":"ContainerDied","Data":"1f390e5ba9dc59d1d4b1ef1707577455571a9a0b34a3ce48db1b3641fc7e6743"} Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.497072 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-40e8-account-create-update-mlk24" event={"ID":"35f9d883-d85a-4ed8-907b-c4e79d737c18","Type":"ContainerStarted","Data":"595e45d0f5d10633c2ce427cd958381d1d985483773c4ac8a17bb2b0e19db399"} Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.703706 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bk4gf"] Jan 28 07:05:04 crc kubenswrapper[4642]: W0128 07:05:04.710585 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3d9b18d_9328_43f4_85d4_cf05abf0ac5f.slice/crio-e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f WatchSource:0}: Error finding container e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f: Status 404 returned error can't find the container with id e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.841049 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2f19-account-create-update-x86m7"] Jan 28 07:05:04 crc kubenswrapper[4642]: I0128 07:05:04.929862 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9752-account-create-update-vw6jg"] Jan 28 07:05:04 crc kubenswrapper[4642]: W0128 07:05:04.958683 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c8e5f3_dd67_49ed_97f1_c8eac8e3e8dd.slice/crio-edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25 WatchSource:0}: Error finding container edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25: Status 404 returned error can't find the container with id edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25 Jan 28 07:05:04 crc kubenswrapper[4642]: W0128 07:05:04.959648 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48cf664c_6221_49e7_8eda_bd3b5f059cbe.slice/crio-1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c WatchSource:0}: 
Error finding container 1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c: Status 404 returned error can't find the container with id 1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.506137 4642 generic.go:334] "Generic (PLEG): container finished" podID="35f9d883-d85a-4ed8-907b-c4e79d737c18" containerID="79d6f97bcca0dcf2afa4c59c89778fc41fc99a3d080d6fd6b0b2080212833164" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.506236 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-40e8-account-create-update-mlk24" event={"ID":"35f9d883-d85a-4ed8-907b-c4e79d737c18","Type":"ContainerDied","Data":"79d6f97bcca0dcf2afa4c59c89778fc41fc99a3d080d6fd6b0b2080212833164"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.508174 4642 generic.go:334] "Generic (PLEG): container finished" podID="41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" containerID="a4b3874722745c6c64f9be87b3da67e5cde249adac1a852c1222f7d9859238bc" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.508263 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" event={"ID":"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd","Type":"ContainerDied","Data":"a4b3874722745c6c64f9be87b3da67e5cde249adac1a852c1222f7d9859238bc"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.508290 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" event={"ID":"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd","Type":"ContainerStarted","Data":"edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.509505 4642 generic.go:334] "Generic (PLEG): container finished" podID="be3cadcb-de4f-48dd-aae9-c2ff2ca38571" containerID="ba9382bf6c4faad83b68a6e1c918e372cc1171069e3cca2bcd5e0f0de77e8f0b" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.509547 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-655lk" event={"ID":"be3cadcb-de4f-48dd-aae9-c2ff2ca38571","Type":"ContainerDied","Data":"ba9382bf6c4faad83b68a6e1c918e372cc1171069e3cca2bcd5e0f0de77e8f0b"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.510462 4642 generic.go:334] "Generic (PLEG): container finished" podID="9ef3a51e-a476-442b-baf1-5650caa83c8e" containerID="ff06a6b5dbba6b8efc2678a3ffe95bc8f74daab7b5106b010fbd362c1fa375af" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.510505 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7p52" event={"ID":"9ef3a51e-a476-442b-baf1-5650caa83c8e","Type":"ContainerDied","Data":"ff06a6b5dbba6b8efc2678a3ffe95bc8f74daab7b5106b010fbd362c1fa375af"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.510518 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7p52" event={"ID":"9ef3a51e-a476-442b-baf1-5650caa83c8e","Type":"ContainerStarted","Data":"3a471cada775c99e5164299d326b74f465857e063efef69dfe23eb50c88009e1"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.511360 4642 generic.go:334] "Generic (PLEG): container finished" podID="48cf664c-6221-49e7-8eda-bd3b5f059cbe" containerID="972383baec7d5de61350f07411785e47be5c10fa045d8f2fbd4e8e0ae71872c2" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.511397 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-2f19-account-create-update-x86m7" event={"ID":"48cf664c-6221-49e7-8eda-bd3b5f059cbe","Type":"ContainerDied","Data":"972383baec7d5de61350f07411785e47be5c10fa045d8f2fbd4e8e0ae71872c2"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.511411 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" event={"ID":"48cf664c-6221-49e7-8eda-bd3b5f059cbe","Type":"ContainerStarted","Data":"1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.512336 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerStarted","Data":"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.513255 4642 generic.go:334] "Generic (PLEG): container finished" podID="c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" containerID="b4ba954ce240c66dd88cc9ad803dcec524d4e9f9746504ed0d251c167a523594" exitCode=0 Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.513407 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bk4gf" event={"ID":"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f","Type":"ContainerDied","Data":"b4ba954ce240c66dd88cc9ad803dcec524d4e9f9746504ed0d251c167a523594"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.513427 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bk4gf" event={"ID":"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f","Type":"ContainerStarted","Data":"e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f"} Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.788344 4642 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.839748 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.839943 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.840065 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.840092 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.840127 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.840282 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-659fz\" (UniqueName: \"kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz\") pod \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\" (UID: \"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc\") "
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.841339 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.846868 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.848289 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts" (OuterVolumeSpecName: "scripts") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.855000 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz" (OuterVolumeSpecName: "kube-api-access-659fz") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "kube-api-access-659fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.869471 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.884137 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data" (OuterVolumeSpecName: "config-data") pod "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" (UID: "e6aa333d-da9b-4f35-86a3-b88b6fe94ebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942406 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942436 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942446 4642 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942456 4642 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942466 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-659fz\" (UniqueName: \"kubernetes.io/projected/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-kube-api-access-659fz\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:05 crc kubenswrapper[4642]: I0128 07:05:05.942482 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.025428 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.025694 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-log" containerID="cri-o://0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5" gracePeriod=30
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.025852 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-httpd" containerID="cri-o://aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5" gracePeriod=30
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.523375 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-68q8l" event={"ID":"e6aa333d-da9b-4f35-86a3-b88b6fe94ebc","Type":"ContainerDied","Data":"b3c4b4a8793104f62bd36792eb30d3fed9eebcab37605b4605206e780cf8ffbb"}
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.523766 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c4b4a8793104f62bd36792eb30d3fed9eebcab37605b4605206e780cf8ffbb"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.523439 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-68q8l"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.525734 4642 generic.go:334] "Generic (PLEG): container finished" podID="15853341-a13e-4f13-a998-9026f9034213" containerID="0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5" exitCode=143
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.525862 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerDied","Data":"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5"}
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.750384 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 07:05:06 crc kubenswrapper[4642]: E0128 07:05:06.750980 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" containerName="cinder-db-sync"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.750997 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" containerName="cinder-db-sync"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.751150 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" containerName="cinder-db-sync"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.751978 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.756661 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.756818 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.757016 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5z67g"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.758376 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.805822 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.821616 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"]
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.822915 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.848601 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-655lk"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.855695 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"]
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859595 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9h5g\" (UniqueName: \"kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g\") pod \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") "
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859674 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts\") pod \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\" (UID: \"be3cadcb-de4f-48dd-aae9-c2ff2ca38571\") "
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859907 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859953 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859968 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.859987 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx64s\" (UniqueName: \"kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860001 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860019 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860036 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860082 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860113 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860138 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvmt\" (UniqueName: \"kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860158 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.860199 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.861391 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be3cadcb-de4f-48dd-aae9-c2ff2ca38571" (UID: "be3cadcb-de4f-48dd-aae9-c2ff2ca38571"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.871065 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g" (OuterVolumeSpecName: "kube-api-access-v9h5g") pod "be3cadcb-de4f-48dd-aae9-c2ff2ca38571" (UID: "be3cadcb-de4f-48dd-aae9-c2ff2ca38571"). InnerVolumeSpecName "kube-api-access-v9h5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963659 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963715 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963742 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvmt\" (UniqueName: \"kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963768 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963799 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963825 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963855 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963869 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963889 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx64s\" (UniqueName: \"kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963904 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963922 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963976 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.963987 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9h5g\" (UniqueName: \"kubernetes.io/projected/be3cadcb-de4f-48dd-aae9-c2ff2ca38571-kube-api-access-v9h5g\") on node \"crc\" DevicePath \"\""
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.964776 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.965279 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.965888 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.966402 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.968252 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.969454 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.971755 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.972392 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.972897 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.973048 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.983853 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvmt\" (UniqueName: \"kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt\") pod \"dnsmasq-dns-75f9d8d9f-s9pnh\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh"
Jan 28 07:05:06 crc kubenswrapper[4642]: I0128 07:05:06.987681 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx64s\" (UniqueName: \"kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s\") pod \"cinder-scheduler-0\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " pod="openstack/cinder-scheduler-0"
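[editor's note] The reconciler records above walk each declared volume of cinder-scheduler-0 and dnsmasq-dns-75f9d8d9f-s9pnh through VerifyControllerAttachedVolume, MountVolume, and MountVolume.SetUp before the sandbox is created. A minimal Go sketch of what the cinder-scheduler-0 volume list plausibly looks like in its pod spec, using the k8s.io/api/core/v1 types; the backing secret names are inferred from the "Caches populated" records and are assumptions, and the projected kube-api-access-mx64s volume is injected by the service-account admission machinery rather than declared in the spec:

// volumes_sketch.go: hypothetical reconstruction of the pod-spec volumes
// behind the mount records above (requires k8s.io/api in go.mod).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func cinderSchedulerVolumes() []corev1.Volume {
	return []corev1.Volume{
		// Secret-backed volumes; the SecretName values are assumptions
		// inferred from the reflector cache records in this log.
		{Name: "scripts", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scripts"}}},
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scheduler-config-data"}}},
		{Name: "config-data-custom", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cinder-config-data"}}},
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}},
		// Host-path volume, matching the kubernetes.io/host-path plugin
		// named in the mount records.
		{Name: "etc-machine-id", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/machine-id"}}},
	}
}

func main() {
	for _, v := range cinderSchedulerVolumes() {
		fmt.Println(v.Name)
	}
}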
removing container" podUID="be3cadcb-de4f-48dd-aae9-c2ff2ca38571" containerName="mariadb-database-create" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.081994 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3cadcb-de4f-48dd-aae9-c2ff2ca38571" containerName="mariadb-database-create" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.082128 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3cadcb-de4f-48dd-aae9-c2ff2ca38571" containerName="mariadb-database-create" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.097635 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.103114 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.152864 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.175875 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176009 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176071 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176222 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176331 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176403 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28gh\" (UniqueName: \"kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.176423 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.190139 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.196231 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281167 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281244 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281298 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281349 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281372 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281392 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28gh\" (UniqueName: \"kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281407 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.281903 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.282286 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " 
pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.284634 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.285752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.286725 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.292730 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.324693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28gh\" (UniqueName: \"kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh\") pod \"cinder-api-0\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.448561 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.467860 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.482397 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.509341 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.512270 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.512350 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.578659 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7p52" event={"ID":"9ef3a51e-a476-442b-baf1-5650caa83c8e","Type":"ContainerDied","Data":"3a471cada775c99e5164299d326b74f465857e063efef69dfe23eb50c88009e1"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.578691 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a471cada775c99e5164299d326b74f465857e063efef69dfe23eb50c88009e1" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.578734 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7p52" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584525 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584540 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2f19-account-create-update-x86m7" event={"ID":"48cf664c-6221-49e7-8eda-bd3b5f059cbe","Type":"ContainerDied","Data":"1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584559 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e5f550285aa81d33845e0b94f25617f3ff267588454651febb6fa727957b39c" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584539 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts\") pod \"9ef3a51e-a476-442b-baf1-5650caa83c8e\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584865 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4vcb\" (UniqueName: \"kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb\") pod \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584905 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzcks\" (UniqueName: \"kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks\") pod \"9ef3a51e-a476-442b-baf1-5650caa83c8e\" (UID: \"9ef3a51e-a476-442b-baf1-5650caa83c8e\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.584934 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts\") pod \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\" (UID: \"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.585030 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ef3a51e-a476-442b-baf1-5650caa83c8e" (UID: "9ef3a51e-a476-442b-baf1-5650caa83c8e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.585718 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef3a51e-a476-442b-baf1-5650caa83c8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.585977 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" (UID: "41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.587409 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bk4gf" event={"ID":"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f","Type":"ContainerDied","Data":"e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.587429 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6875fd3f838efb4058642505ea9da2802f57cc9b3667e81d3b0411a0e6e598f" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.587513 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bk4gf" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.590027 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-40e8-account-create-update-mlk24" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.590048 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-40e8-account-create-update-mlk24" event={"ID":"35f9d883-d85a-4ed8-907b-c4e79d737c18","Type":"ContainerDied","Data":"595e45d0f5d10633c2ce427cd958381d1d985483773c4ac8a17bb2b0e19db399"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.590093 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595e45d0f5d10633c2ce427cd958381d1d985483773c4ac8a17bb2b0e19db399" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.592117 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb" (OuterVolumeSpecName: "kube-api-access-s4vcb") pod "41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" (UID: "41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd"). InnerVolumeSpecName "kube-api-access-s4vcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.592850 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks" (OuterVolumeSpecName: "kube-api-access-xzcks") pod "9ef3a51e-a476-442b-baf1-5650caa83c8e" (UID: "9ef3a51e-a476-442b-baf1-5650caa83c8e"). InnerVolumeSpecName "kube-api-access-xzcks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.594530 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" event={"ID":"41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd","Type":"ContainerDied","Data":"edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.594559 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edba5a3357339732ac5479d1257dc4e5ea4acb304d19af2b9cd28610f7dc6f25" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.594601 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9752-account-create-update-vw6jg" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.607438 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-655lk" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.607505 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-655lk" event={"ID":"be3cadcb-de4f-48dd-aae9-c2ff2ca38571","Type":"ContainerDied","Data":"20a977ef5fc627d5f51887fd40ed8fabf1d034df8ce72b77b84945f5e91bfce8"} Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.607529 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a977ef5fc627d5f51887fd40ed8fabf1d034df8ce72b77b84945f5e91bfce8" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.686868 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts\") pod \"35f9d883-d85a-4ed8-907b-c4e79d737c18\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687021 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts\") pod \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687126 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts\") pod \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687255 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r296r\" (UniqueName: \"kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r\") pod \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\" (UID: \"48cf664c-6221-49e7-8eda-bd3b5f059cbe\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687319 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rb4\" (UniqueName: \"kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4\") pod \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\" (UID: \"c3d9b18d-9328-43f4-85d4-cf05abf0ac5f\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687520 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77kxd\" (UniqueName: 
\"kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd\") pod \"35f9d883-d85a-4ed8-907b-c4e79d737c18\" (UID: \"35f9d883-d85a-4ed8-907b-c4e79d737c18\") " Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.688053 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4vcb\" (UniqueName: \"kubernetes.io/projected/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-kube-api-access-s4vcb\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.688133 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzcks\" (UniqueName: \"kubernetes.io/projected/9ef3a51e-a476-442b-baf1-5650caa83c8e-kube-api-access-xzcks\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.688198 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687368 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35f9d883-d85a-4ed8-907b-c4e79d737c18" (UID: "35f9d883-d85a-4ed8-907b-c4e79d737c18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.687820 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" (UID: "c3d9b18d-9328-43f4-85d4-cf05abf0ac5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.688618 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48cf664c-6221-49e7-8eda-bd3b5f059cbe" (UID: "48cf664c-6221-49e7-8eda-bd3b5f059cbe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.691376 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd" (OuterVolumeSpecName: "kube-api-access-77kxd") pod "35f9d883-d85a-4ed8-907b-c4e79d737c18" (UID: "35f9d883-d85a-4ed8-907b-c4e79d737c18"). InnerVolumeSpecName "kube-api-access-77kxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.692519 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.697288 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4" (OuterVolumeSpecName: "kube-api-access-h4rb4") pod "c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" (UID: "c3d9b18d-9328-43f4-85d4-cf05abf0ac5f"). InnerVolumeSpecName "kube-api-access-h4rb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.697341 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r" (OuterVolumeSpecName: "kube-api-access-r296r") pod "48cf664c-6221-49e7-8eda-bd3b5f059cbe" (UID: "48cf664c-6221-49e7-8eda-bd3b5f059cbe"). InnerVolumeSpecName "kube-api-access-r296r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788756 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35f9d883-d85a-4ed8-907b-c4e79d737c18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788778 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788788 4642 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48cf664c-6221-49e7-8eda-bd3b5f059cbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788798 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r296r\" (UniqueName: \"kubernetes.io/projected/48cf664c-6221-49e7-8eda-bd3b5f059cbe-kube-api-access-r296r\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788808 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rb4\" (UniqueName: \"kubernetes.io/projected/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f-kube-api-access-h4rb4\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.788818 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77kxd\" (UniqueName: \"kubernetes.io/projected/35f9d883-d85a-4ed8-907b-c4e79d737c18-kube-api-access-77kxd\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:07 crc kubenswrapper[4642]: I0128 07:05:07.858246 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"] Jan 28 07:05:07 crc kubenswrapper[4642]: W0128 07:05:07.858253 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3686e6e_7a8c_45b8_80a5_c64c8fc92aef.slice/crio-29f617298ebb96a9bfa5d52561fcb5b49d7e75233af9d0830c1a931c39d8fa35 WatchSource:0}: Error finding container 29f617298ebb96a9bfa5d52561fcb5b49d7e75233af9d0830c1a931c39d8fa35: Status 404 returned error can't find the container with id 29f617298ebb96a9bfa5d52561fcb5b49d7e75233af9d0830c1a931c39d8fa35 Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.004270 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.642082 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.651212 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerStarted","Data":"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.651399 4642 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerStarted","Data":"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.654082 4642 generic.go:334] "Generic (PLEG): container finished" podID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerID="93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f" exitCode=0 Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.654151 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" event={"ID":"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef","Type":"ContainerDied","Data":"93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.654559 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" event={"ID":"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef","Type":"ContainerStarted","Data":"29f617298ebb96a9bfa5d52561fcb5b49d7e75233af9d0830c1a931c39d8fa35"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.658318 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerStarted","Data":"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.658358 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerStarted","Data":"f4c05ae2ae566bb7c77b7d00815505c9325593f1b88507ef13914f557b5abf5d"} Jan 28 07:05:08 crc kubenswrapper[4642]: I0128 07:05:08.661023 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerStarted","Data":"dc35eb106092e44e1857570cec276dd3dc2d1d0545fcfbfb6f6a59cf5b84b05f"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.128425 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5n5zh"] Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.128954 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f9d883-d85a-4ed8-907b-c4e79d737c18" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.128971 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f9d883-d85a-4ed8-907b-c4e79d737c18" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.128988 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.128995 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.129002 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef3a51e-a476-442b-baf1-5650caa83c8e" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129007 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef3a51e-a476-442b-baf1-5650caa83c8e" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.129015 4642 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129020 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.129030 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48cf664c-6221-49e7-8eda-bd3b5f059cbe" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129035 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="48cf664c-6221-49e7-8eda-bd3b5f059cbe" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129210 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129229 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f9d883-d85a-4ed8-907b-c4e79d737c18" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129238 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="48cf664c-6221-49e7-8eda-bd3b5f059cbe" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129249 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" containerName="mariadb-account-create-update" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129258 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef3a51e-a476-442b-baf1-5650caa83c8e" containerName="mariadb-database-create" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.129767 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.134685 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-668vb" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.134918 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.135094 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.137583 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5n5zh"] Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.238557 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkmf\" (UniqueName: \"kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.238618 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.238645 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.238686 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.340428 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.340696 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.340752 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: 
\"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.340839 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkmf\" (UniqueName: \"kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.346871 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.360922 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.361340 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.377723 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkmf\" (UniqueName: \"kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf\") pod \"nova-cell0-conductor-db-sync-5n5zh\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.460244 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.629777 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752632 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752672 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752692 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752724 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752798 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752861 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752918 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.752937 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cndqk\" (UniqueName: \"kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk\") pod \"15853341-a13e-4f13-a998-9026f9034213\" (UID: \"15853341-a13e-4f13-a998-9026f9034213\") " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.754100 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.754457 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs" (OuterVolumeSpecName: "logs") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.779843 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts" (OuterVolumeSpecName: "scripts") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.779844 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk" (OuterVolumeSpecName: "kube-api-access-cndqk") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "kube-api-access-cndqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.784413 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.788687 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerStarted","Data":"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.788801 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api-log" containerID="cri-o://1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" gracePeriod=30 Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.788900 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api" containerID="cri-o://cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" gracePeriod=30 Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.788918 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.794655 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerStarted","Data":"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.798623 4642 generic.go:334] "Generic (PLEG): container finished" podID="15853341-a13e-4f13-a998-9026f9034213" containerID="aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5" exitCode=0 Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.798653 4642 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.798674 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerDied","Data":"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.798699 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"15853341-a13e-4f13-a998-9026f9034213","Type":"ContainerDied","Data":"532e81ff6fe727a8a522238c743ea92aa193c8a7ba84f6f36081a2ed8fbe67d0"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.798726 4642 scope.go:117] "RemoveContainer" containerID="aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.811659 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" event={"ID":"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef","Type":"ContainerStarted","Data":"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b"} Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.812007 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.812442 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.827042 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.827026251 podStartE2EDuration="2.827026251s" podCreationTimestamp="2026-01-28 07:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:09.821631218 +0000 UTC m=+1033.053720027" watchObservedRunningTime="2026-01-28 07:05:09.827026251 +0000 UTC m=+1033.059115060" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.843268 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" podStartSLOduration=3.843255717 podStartE2EDuration="3.843255717s" podCreationTimestamp="2026-01-28 07:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:09.84106157 +0000 UTC m=+1033.073150379" watchObservedRunningTime="2026-01-28 07:05:09.843255717 +0000 UTC m=+1033.075344526" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855292 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855320 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15853341-a13e-4f13-a998-9026f9034213-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855330 4642 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855340 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855364 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.855373 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cndqk\" (UniqueName: \"kubernetes.io/projected/15853341-a13e-4f13-a998-9026f9034213-kube-api-access-cndqk\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.870859 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.870915 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data" (OuterVolumeSpecName: "config-data") pod "15853341-a13e-4f13-a998-9026f9034213" (UID: "15853341-a13e-4f13-a998-9026f9034213"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.879488 4642 scope.go:117] "RemoveContainer" containerID="0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.888383 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.915115 4642 scope.go:117] "RemoveContainer" containerID="aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.917552 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5\": container with ID starting with aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5 not found: ID does not exist" containerID="aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.917595 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5"} err="failed to get container status \"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5\": rpc error: code = NotFound desc = could not find container \"aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5\": container with ID starting with aecf1be783189cb9e0bc0b617e50c966034ee267137fb04a5158161548d9a3c5 not found: ID does not exist" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.917619 4642 scope.go:117] "RemoveContainer" containerID="0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5" Jan 28 07:05:09 crc kubenswrapper[4642]: E0128 07:05:09.917941 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5\": container with ID starting with 0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5 not found: ID does not exist" containerID="0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.917969 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5"} err="failed to get container status \"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5\": rpc error: code = NotFound desc = could not find container \"0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5\": container with ID starting with 0fe5311386b56bf9ddcef24a17968865e8a17517d1865ad7f03114327b3756a5 not found: ID does not exist" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.958259 4642 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.958938 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15853341-a13e-4f13-a998-9026f9034213-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:09 crc kubenswrapper[4642]: I0128 07:05:09.958970 4642 
reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.096682 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5n5zh"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.160234 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.167168 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.182238 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.182615 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-httpd" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.182634 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-httpd" Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.182656 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-log" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.182663 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-log" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.182815 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-httpd" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.182832 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="15853341-a13e-4f13-a998-9026f9034213" containerName="glance-log" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.183651 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.188251 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.189106 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.189311 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372260 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-logs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372514 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372619 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372741 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372840 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nk5t\" (UniqueName: \"kubernetes.io/projected/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-kube-api-access-6nk5t\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.372927 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.373029 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.373095 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.410298 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.421220 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4ff4554c-l99rf" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.447999 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474584 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474656 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474723 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-logs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474796 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474816 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474873 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474909 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nk5t\" (UniqueName: \"kubernetes.io/projected/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-kube-api-access-6nk5t\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.474952 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.475486 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.478352 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-logs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.478408 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.486856 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.487130 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.491761 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.503238 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.504030 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nk5t\" (UniqueName: \"kubernetes.io/projected/e2b7c0c6-df21-4342-9aec-f6b7ba5188be-kube-api-access-6nk5t\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.528602 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2b7c0c6-df21-4342-9aec-f6b7ba5188be\") " pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576364 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576418 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576469 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576553 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576610 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576699 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.576740 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r28gh\" (UniqueName: \"kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh\") pod \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\" (UID: \"905b6d81-1e2e-4b4f-837b-85aa47a1c706\") " Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.580760 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.581655 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs" (OuterVolumeSpecName: "logs") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.585514 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh" (OuterVolumeSpecName: "kube-api-access-r28gh") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "kube-api-access-r28gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.591333 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.596385 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts" (OuterVolumeSpecName: "scripts") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.608304 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.628745 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data" (OuterVolumeSpecName: "config-data") pod "905b6d81-1e2e-4b4f-837b-85aa47a1c706" (UID: "905b6d81-1e2e-4b4f-837b-85aa47a1c706"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678756 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678787 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905b6d81-1e2e-4b4f-837b-85aa47a1c706-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678797 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r28gh\" (UniqueName: \"kubernetes.io/projected/905b6d81-1e2e-4b4f-837b-85aa47a1c706-kube-api-access-r28gh\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678809 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678817 4642 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/905b6d81-1e2e-4b4f-837b-85aa47a1c706-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678824 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.678832 4642 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/905b6d81-1e2e-4b4f-837b-85aa47a1c706-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.805378 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820599 4642 generic.go:334] "Generic (PLEG): container finished" podID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerID="cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" exitCode=0 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820628 4642 generic.go:334] "Generic (PLEG): container finished" podID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerID="1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" exitCode=143 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820666 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerDied","Data":"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820693 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerDied","Data":"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820702 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"905b6d81-1e2e-4b4f-837b-85aa47a1c706","Type":"ContainerDied","Data":"f4c05ae2ae566bb7c77b7d00815505c9325593f1b88507ef13914f557b5abf5d"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820716 4642 scope.go:117] "RemoveContainer" containerID="cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.820809 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.823850 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerStarted","Data":"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834060 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerStarted","Data":"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834171 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-central-agent" containerID="cri-o://e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c" gracePeriod=30 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834249 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834260 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="proxy-httpd" containerID="cri-o://274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d" gracePeriod=30 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834287 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-notification-agent" 
containerID="cri-o://dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e" gracePeriod=30 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.834325 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="sg-core" containerID="cri-o://bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a" gracePeriod=30 Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.838036 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" event={"ID":"40790093-08bf-4d1a-8718-dc943de05f37","Type":"ContainerStarted","Data":"e34c2710e4e72f8d4e99c4c33dd45c3622f977b0a5c22553847736304b6e4c1f"} Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.840609 4642 scope.go:117] "RemoveContainer" containerID="1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.860127 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.671715282 podStartE2EDuration="4.86009392s" podCreationTimestamp="2026-01-28 07:05:06 +0000 UTC" firstStartedPulling="2026-01-28 07:05:07.693703005 +0000 UTC m=+1030.925791814" lastFinishedPulling="2026-01-28 07:05:08.882081644 +0000 UTC m=+1032.114170452" observedRunningTime="2026-01-28 07:05:10.853852444 +0000 UTC m=+1034.085941253" watchObservedRunningTime="2026-01-28 07:05:10.86009392 +0000 UTC m=+1034.092182729" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.877308 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.892256 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.900069 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.227468667 podStartE2EDuration="8.900032493s" podCreationTimestamp="2026-01-28 07:05:02 +0000 UTC" firstStartedPulling="2026-01-28 07:05:03.265135475 +0000 UTC m=+1026.497224284" lastFinishedPulling="2026-01-28 07:05:09.937699301 +0000 UTC m=+1033.169788110" observedRunningTime="2026-01-28 07:05:10.883505148 +0000 UTC m=+1034.115593957" watchObservedRunningTime="2026-01-28 07:05:10.900032493 +0000 UTC m=+1034.132121303" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.912739 4642 scope.go:117] "RemoveContainer" containerID="cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.912832 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.913133 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api-log" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.913151 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api-log" Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.913181 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.913202 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" 
containerName="cinder-api" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.913342 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.913366 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" containerName="cinder-api-log" Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.914348 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d\": container with ID starting with cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d not found: ID does not exist" containerID="cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.914392 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d"} err="failed to get container status \"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d\": rpc error: code = NotFound desc = could not find container \"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d\": container with ID starting with cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d not found: ID does not exist" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.914418 4642 scope.go:117] "RemoveContainer" containerID="1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.914947 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.921335 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.921457 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.921629 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 07:05:10 crc kubenswrapper[4642]: E0128 07:05:10.925833 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94\": container with ID starting with 1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94 not found: ID does not exist" containerID="1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.925865 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94"} err="failed to get container status \"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94\": rpc error: code = NotFound desc = could not find container \"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94\": container with ID starting with 1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94 not found: ID does not exist" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.925887 4642 scope.go:117] "RemoveContainer" 
containerID="cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.936697 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.944171 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d"} err="failed to get container status \"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d\": rpc error: code = NotFound desc = could not find container \"cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d\": container with ID starting with cc500df21c7709c936e98be5808bfc115b4255bcfd165c9c4c2304b72c8e708d not found: ID does not exist" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.944458 4642 scope.go:117] "RemoveContainer" containerID="1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94" Jan 28 07:05:10 crc kubenswrapper[4642]: I0128 07:05:10.950504 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94"} err="failed to get container status \"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94\": rpc error: code = NotFound desc = could not find container \"1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94\": container with ID starting with 1d7e79820c494cbe6bc52034276901ea4bbb157e2136e2c1711fb84243d16a94 not found: ID does not exist" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089429 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-scripts\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089472 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76jr\" (UniqueName: \"kubernetes.io/projected/7cb1c066-b293-4f15-8056-5422fe062a98-kube-api-access-f76jr\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089528 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089557 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089656 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089688 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cb1c066-b293-4f15-8056-5422fe062a98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089721 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089746 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb1c066-b293-4f15-8056-5422fe062a98-logs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.089759 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.111765 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15853341-a13e-4f13-a998-9026f9034213" path="/var/lib/kubelet/pods/15853341-a13e-4f13-a998-9026f9034213/volumes" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.112622 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905b6d81-1e2e-4b4f-837b-85aa47a1c706" path="/var/lib/kubelet/pods/905b6d81-1e2e-4b4f-837b-85aa47a1c706/volumes" Jan 28 07:05:11 crc kubenswrapper[4642]: E0128 07:05:11.159012 4642 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578099b8_075f_47d7_b5bf_4db9ef550508.slice/crio-conmon-274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578099b8_075f_47d7_b5bf_4db9ef550508.slice/crio-274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.192362 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb1c066-b293-4f15-8056-5422fe062a98-logs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.192877 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.192980 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-scripts\") pod \"cinder-api-0\" (UID: 
\"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193052 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76jr\" (UniqueName: \"kubernetes.io/projected/7cb1c066-b293-4f15-8056-5422fe062a98-kube-api-access-f76jr\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193152 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193251 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193795 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193881 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cb1c066-b293-4f15-8056-5422fe062a98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.193965 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.194542 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cb1c066-b293-4f15-8056-5422fe062a98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.192758 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb1c066-b293-4f15-8056-5422fe062a98-logs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.198140 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.198306 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data\") pod \"cinder-api-0\" (UID: 
\"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.198939 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.199083 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-config-data-custom\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.199600 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-scripts\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.206411 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cb1c066-b293-4f15-8056-5422fe062a98-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.208082 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76jr\" (UniqueName: \"kubernetes.io/projected/7cb1c066-b293-4f15-8056-5422fe062a98-kube-api-access-f76jr\") pod \"cinder-api-0\" (UID: \"7cb1c066-b293-4f15-8056-5422fe062a98\") " pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.242370 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.420924 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.591923 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.592118 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-log" containerID="cri-o://15dbe696b934bd09d8341707ccd9135f43d5140feb7d1dcfa639f46c873bc41a" gracePeriod=30 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.592499 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-httpd" containerID="cri-o://a98f299adcf7ea63064992435916a114d91f4a8ea25a12db110f82f97974ec91" gracePeriod=30 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.713720 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.863755 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cb1c066-b293-4f15-8056-5422fe062a98","Type":"ContainerStarted","Data":"d49fb234f9b8f0d3961e3506047b5ca7f40f9767d2373954df2a0246b11127e2"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.866818 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2b7c0c6-df21-4342-9aec-f6b7ba5188be","Type":"ContainerStarted","Data":"04c58e4025085ed763fac36c7a6ca6607f08ef006fc158d58ab899a30c75d93b"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.868774 4642 generic.go:334] "Generic (PLEG): container finished" podID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerID="15dbe696b934bd09d8341707ccd9135f43d5140feb7d1dcfa639f46c873bc41a" exitCode=143 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.868836 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerDied","Data":"15dbe696b934bd09d8341707ccd9135f43d5140feb7d1dcfa639f46c873bc41a"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.873203 4642 generic.go:334] "Generic (PLEG): container finished" podID="578099b8-075f-47d7-b5bf-4db9ef550508" containerID="274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d" exitCode=0 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.873225 4642 generic.go:334] "Generic (PLEG): container finished" podID="578099b8-075f-47d7-b5bf-4db9ef550508" containerID="bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a" exitCode=2 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.873234 4642 generic.go:334] "Generic (PLEG): container finished" podID="578099b8-075f-47d7-b5bf-4db9ef550508" containerID="dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e" exitCode=0 Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.873212 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerDied","Data":"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 
07:05:11.873332 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerDied","Data":"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.873360 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerDied","Data":"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e"} Jan 28 07:05:11 crc kubenswrapper[4642]: I0128 07:05:11.976746 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7476bb99fc-vvh9d" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.023047 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.023269 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5586f7766d-jr6js" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-api" containerID="cri-o://71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c" gracePeriod=30 Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.023471 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5586f7766d-jr6js" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-httpd" containerID="cri-o://ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe" gracePeriod=30 Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.154697 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.695891 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.828960 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829008 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829070 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829116 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829219 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzq59\" (UniqueName: \"kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829270 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.829374 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd\") pod \"578099b8-075f-47d7-b5bf-4db9ef550508\" (UID: \"578099b8-075f-47d7-b5bf-4db9ef550508\") " Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.830153 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.834145 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts" (OuterVolumeSpecName: "scripts") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.834457 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.843923 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59" (OuterVolumeSpecName: "kube-api-access-fzq59") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "kube-api-access-fzq59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.884454 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.910086 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2b7c0c6-df21-4342-9aec-f6b7ba5188be","Type":"ContainerStarted","Data":"fefd9cb67f48433cc9e028c91c025c1c2ec0f566edf9795d1693d280057b3f6b"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.910132 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2b7c0c6-df21-4342-9aec-f6b7ba5188be","Type":"ContainerStarted","Data":"3552eb27ab32a9b6123f6760734bdfa92fa70dbbe9b7a36048a016596ab511f4"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.912249 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.918977 4642 generic.go:334] "Generic (PLEG): container finished" podID="578099b8-075f-47d7-b5bf-4db9ef550508" containerID="e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c" exitCode=0 Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.919031 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerDied","Data":"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.919074 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"578099b8-075f-47d7-b5bf-4db9ef550508","Type":"ContainerDied","Data":"8832e83e7ccb8c1f2718e462fd58be46ffda2b90a9de3c5f3304dad30db2ef53"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.919090 4642 scope.go:117] "RemoveContainer" containerID="274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.919292 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932611 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932638 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/578099b8-075f-47d7-b5bf-4db9ef550508-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932648 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932657 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932665 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.932731 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzq59\" (UniqueName: \"kubernetes.io/projected/578099b8-075f-47d7-b5bf-4db9ef550508-kube-api-access-fzq59\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.938339 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.938327044 podStartE2EDuration="2.938327044s" podCreationTimestamp="2026-01-28 07:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:12.929649799 +0000 UTC m=+1036.161738608" watchObservedRunningTime="2026-01-28 07:05:12.938327044 +0000 UTC m=+1036.170415854" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.940835 4642 generic.go:334] "Generic (PLEG): container finished" 
podID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerID="ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe" exitCode=0 Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.940891 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerDied","Data":"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.945056 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cb1c066-b293-4f15-8056-5422fe062a98","Type":"ContainerStarted","Data":"1cad1d7098aedfe9c5e00892cbf226c7ef03d1ffba017e619fc7ba7bb437992b"} Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.960767 4642 scope.go:117] "RemoveContainer" containerID="bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.961254 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data" (OuterVolumeSpecName: "config-data") pod "578099b8-075f-47d7-b5bf-4db9ef550508" (UID: "578099b8-075f-47d7-b5bf-4db9ef550508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:12 crc kubenswrapper[4642]: I0128 07:05:12.992621 4642 scope.go:117] "RemoveContainer" containerID="dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.026266 4642 scope.go:117] "RemoveContainer" containerID="e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.035624 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578099b8-075f-47d7-b5bf-4db9ef550508-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.060857 4642 scope.go:117] "RemoveContainer" containerID="274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.061533 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d\": container with ID starting with 274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d not found: ID does not exist" containerID="274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.061562 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d"} err="failed to get container status \"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d\": rpc error: code = NotFound desc = could not find container \"274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d\": container with ID starting with 274c2458d84d458b1bd5ccf216a8ba61523825ee2125201a5726cca9fe519a5d not found: ID does not exist" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.061583 4642 scope.go:117] "RemoveContainer" containerID="bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.062024 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a\": container with ID starting with bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a not found: ID does not exist" containerID="bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.062045 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a"} err="failed to get container status \"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a\": rpc error: code = NotFound desc = could not find container \"bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a\": container with ID starting with bcbe11afad18464512ee5925632ff3f6fcdf7b00b8da4fb04433ea90efbf105a not found: ID does not exist" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.062060 4642 scope.go:117] "RemoveContainer" containerID="dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.062437 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e\": container with ID starting with dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e not found: ID does not exist" containerID="dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.062457 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e"} err="failed to get container status \"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e\": rpc error: code = NotFound desc = could not find container \"dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e\": container with ID starting with dc2b0b0c8db6f448891b664cf960813ffe5c714b02e7adbb4ee459416b5d6f2e not found: ID does not exist" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.062498 4642 scope.go:117] "RemoveContainer" containerID="e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.062819 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c\": container with ID starting with e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c not found: ID does not exist" containerID="e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.062837 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c"} err="failed to get container status \"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c\": rpc error: code = NotFound desc = could not find container \"e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c\": container with ID starting with e7976304238b68b824a0bd27fe538ce22dd35d4185faaaca6b9a637511de9e3c not found: ID does not exist" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.237587 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 
07:05:13.244037 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.306213 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.307346 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="proxy-httpd" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307359 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="proxy-httpd" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.307374 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-central-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307380 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-central-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.307429 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-notification-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307437 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-notification-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: E0128 07:05:13.307452 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="sg-core" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307457 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="sg-core" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307831 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-notification-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307846 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="ceilometer-central-agent" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307854 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="sg-core" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.307866 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" containerName="proxy-httpd" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.318820 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.320624 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.320788 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.328795 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447094 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447134 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447153 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447303 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bzh\" (UniqueName: \"kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447419 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447454 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.447567 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550390 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550466 
4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550506 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550574 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48bzh\" (UniqueName: \"kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550679 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550703 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550801 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.550974 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.551285 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.557385 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.558105 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.558602 4642 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.567301 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.569767 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48bzh\" (UniqueName: \"kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh\") pod \"ceilometer-0\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.657717 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.776829 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.954112 4642 generic.go:334] "Generic (PLEG): container finished" podID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerID="71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c" exitCode=0 Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.954147 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerDied","Data":"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c"} Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.954165 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5586f7766d-jr6js" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.954201 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5586f7766d-jr6js" event={"ID":"6ba95f7c-c848-4c02-b886-357bbd0f1223","Type":"ContainerDied","Data":"0b33548272365b99119db86110f1e9dae2bf010bbd78472ecca863008f673d1c"} Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.954243 4642 scope.go:117] "RemoveContainer" containerID="ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.956511 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7cb1c066-b293-4f15-8056-5422fe062a98","Type":"ContainerStarted","Data":"fc9131b13811a46af3f4d720e867dcb3f7251060ca41f26e0c1766489943681d"} Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.956665 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.960296 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs\") pod \"6ba95f7c-c848-4c02-b886-357bbd0f1223\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.960399 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle\") pod \"6ba95f7c-c848-4c02-b886-357bbd0f1223\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.960451 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config\") pod \"6ba95f7c-c848-4c02-b886-357bbd0f1223\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.960485 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hmhp\" (UniqueName: \"kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp\") pod \"6ba95f7c-c848-4c02-b886-357bbd0f1223\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.960579 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config\") pod \"6ba95f7c-c848-4c02-b886-357bbd0f1223\" (UID: \"6ba95f7c-c848-4c02-b886-357bbd0f1223\") " Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.964625 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp" (OuterVolumeSpecName: "kube-api-access-6hmhp") pod "6ba95f7c-c848-4c02-b886-357bbd0f1223" (UID: "6ba95f7c-c848-4c02-b886-357bbd0f1223"). InnerVolumeSpecName "kube-api-access-6hmhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.966282 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6ba95f7c-c848-4c02-b886-357bbd0f1223" (UID: "6ba95f7c-c848-4c02-b886-357bbd0f1223"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.974880 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.974861061 podStartE2EDuration="3.974861061s" podCreationTimestamp="2026-01-28 07:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:13.97186594 +0000 UTC m=+1037.203954748" watchObservedRunningTime="2026-01-28 07:05:13.974861061 +0000 UTC m=+1037.206949870" Jan 28 07:05:13 crc kubenswrapper[4642]: I0128 07:05:13.978441 4642 scope.go:117] "RemoveContainer" containerID="71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.004378 4642 scope.go:117] "RemoveContainer" containerID="ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.010123 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba95f7c-c848-4c02-b886-357bbd0f1223" (UID: "6ba95f7c-c848-4c02-b886-357bbd0f1223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.010135 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config" (OuterVolumeSpecName: "config") pod "6ba95f7c-c848-4c02-b886-357bbd0f1223" (UID: "6ba95f7c-c848-4c02-b886-357bbd0f1223"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:14 crc kubenswrapper[4642]: E0128 07:05:14.012399 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe\": container with ID starting with ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe not found: ID does not exist" containerID="ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.012435 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe"} err="failed to get container status \"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe\": rpc error: code = NotFound desc = could not find container \"ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe\": container with ID starting with ace730037e7765af679a97c0a9ccf6eacea51057e861a67847ef6f6d76eafbfe not found: ID does not exist" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.012461 4642 scope.go:117] "RemoveContainer" containerID="71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c" Jan 28 07:05:14 crc kubenswrapper[4642]: E0128 07:05:14.012861 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c\": container with ID starting with 71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c not found: ID does not exist" containerID="71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.012886 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c"} err="failed to get container status \"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c\": rpc error: code = NotFound desc = could not find container \"71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c\": container with ID starting with 71cae0757ce0c13e45e0ae78ffa52ba7ea2c406bf65d33326d1dde217acd405c not found: ID does not exist" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.043015 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6ba95f7c-c848-4c02-b886-357bbd0f1223" (UID: "6ba95f7c-c848-4c02-b886-357bbd0f1223"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.062639 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.062663 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.062672 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hmhp\" (UniqueName: \"kubernetes.io/projected/6ba95f7c-c848-4c02-b886-357bbd0f1223-kube-api-access-6hmhp\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.062685 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.062694 4642 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba95f7c-c848-4c02-b886-357bbd0f1223-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.077730 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.287144 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.292918 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5586f7766d-jr6js"] Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.749452 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:47324->10.217.0.150:9292: read: connection reset by peer" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.749464 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:47308->10.217.0.150:9292: read: connection reset by peer" Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.972090 4642 generic.go:334] "Generic (PLEG): container finished" podID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerID="a98f299adcf7ea63064992435916a114d91f4a8ea25a12db110f82f97974ec91" exitCode=0 Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.972232 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerDied","Data":"a98f299adcf7ea63064992435916a114d91f4a8ea25a12db110f82f97974ec91"} Jan 28 07:05:14 crc kubenswrapper[4642]: I0128 07:05:14.973814 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerStarted","Data":"f4ac43677dc3d6f42fe29284a3d3134143b483c2d32c346f145a66da7d204fbf"} Jan 28 07:05:15 crc kubenswrapper[4642]: 
I0128 07:05:15.109798 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578099b8-075f-47d7-b5bf-4db9ef550508" path="/var/lib/kubelet/pods/578099b8-075f-47d7-b5bf-4db9ef550508/volumes" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.110850 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" path="/var/lib/kubelet/pods/6ba95f7c-c848-4c02-b886-357bbd0f1223/volumes" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.232978 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.388254 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.388621 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqb44\" (UniqueName: \"kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.388798 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.388877 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.388999 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.389088 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.389168 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.389407 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.389616 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs" (OuterVolumeSpecName: "logs") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.389637 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.390326 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.390403 4642 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2266e1d6-2cec-4bfc-9a24-b6408860e980-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.395785 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts" (OuterVolumeSpecName: "scripts") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.410281 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44" (OuterVolumeSpecName: "kube-api-access-xqb44") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "kube-api-access-xqb44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.410326 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.415818 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: E0128 07:05:15.438739 4642 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data podName:2266e1d6-2cec-4bfc-9a24-b6408860e980 nodeName:}" failed. No retries permitted until 2026-01-28 07:05:15.938714694 +0000 UTC m=+1039.170803493 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980") : error deleting /var/lib/kubelet/pods/2266e1d6-2cec-4bfc-9a24-b6408860e980/volume-subpaths: remove /var/lib/kubelet/pods/2266e1d6-2cec-4bfc-9a24-b6408860e980/volume-subpaths: no such file or directory Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.440777 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.493856 4642 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.493882 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.493895 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqb44\" (UniqueName: \"kubernetes.io/projected/2266e1d6-2cec-4bfc-9a24-b6408860e980-kube-api-access-xqb44\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.493932 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.493943 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.512738 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.596327 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.987272 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2266e1d6-2cec-4bfc-9a24-b6408860e980","Type":"ContainerDied","Data":"7b44222e576f8c1ce45730f7a6081d9eb3fac5af21415e2a6920c0cf5630721e"} Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.987315 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:15 crc kubenswrapper[4642]: I0128 07:05:15.987332 4642 scope.go:117] "RemoveContainer" containerID="a98f299adcf7ea63064992435916a114d91f4a8ea25a12db110f82f97974ec91" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.006306 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") pod \"2266e1d6-2cec-4bfc-9a24-b6408860e980\" (UID: \"2266e1d6-2cec-4bfc-9a24-b6408860e980\") " Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.008453 4642 scope.go:117] "RemoveContainer" containerID="15dbe696b934bd09d8341707ccd9135f43d5140feb7d1dcfa639f46c873bc41a" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.012895 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data" (OuterVolumeSpecName: "config-data") pod "2266e1d6-2cec-4bfc-9a24-b6408860e980" (UID: "2266e1d6-2cec-4bfc-9a24-b6408860e980"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.109550 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2266e1d6-2cec-4bfc-9a24-b6408860e980-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.333943 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.352016 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.364822 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:16 crc kubenswrapper[4642]: E0128 07:05:16.365421 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365444 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: E0128 07:05:16.365463 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-api" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365471 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-api" Jan 28 07:05:16 crc kubenswrapper[4642]: E0128 07:05:16.365513 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365521 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: E0128 07:05:16.365540 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-log" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365548 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-log" Jan 28 07:05:16 crc kubenswrapper[4642]: 
I0128 07:05:16.365800 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365821 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-httpd" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365834 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" containerName="glance-log" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.365850 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba95f7c-c848-4c02-b886-357bbd0f1223" containerName="neutron-api" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.367278 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.371120 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.371348 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.377859 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519627 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519683 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519710 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519734 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519764 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98z9x\" (UniqueName: \"kubernetes.io/projected/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-kube-api-access-98z9x\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519888 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.519987 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.520091 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.621585 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.621918 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.621953 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.621982 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622005 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622024 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622046 4642 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622070 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98z9x\" (UniqueName: \"kubernetes.io/projected/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-kube-api-access-98z9x\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622386 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622412 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-logs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.622608 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.627974 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.628090 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.628314 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.628440 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.635344 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98z9x\" (UniqueName: 
\"kubernetes.io/projected/16c4c401-8d2d-479c-bbb2-75b0f3ac300a-kube-api-access-98z9x\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.646365 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16c4c401-8d2d-479c-bbb2-75b0f3ac300a\") " pod="openstack/glance-default-internal-api-0" Jan 28 07:05:16 crc kubenswrapper[4642]: I0128 07:05:16.683306 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.108662 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2266e1d6-2cec-4bfc-9a24-b6408860e980" path="/var/lib/kubelet/pods/2266e1d6-2cec-4bfc-9a24-b6408860e980/volumes" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.198007 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.198806 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.245805 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.246058 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="dnsmasq-dns" containerID="cri-o://2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f" gracePeriod=10 Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.437059 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.466649 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.612673 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.751973 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.752221 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.752321 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.752410 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.752578 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.752662 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ktt\" (UniqueName: \"kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt\") pod \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\" (UID: \"ff037186-7ca7-4860-a8ae-0d3b84abe5da\") " Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.755694 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt" (OuterVolumeSpecName: "kube-api-access-z4ktt") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "kube-api-access-z4ktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.785466 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.787728 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.789530 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config" (OuterVolumeSpecName: "config") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.794734 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.796387 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff037186-7ca7-4860-a8ae-0d3b84abe5da" (UID: "ff037186-7ca7-4860-a8ae-0d3b84abe5da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854492 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854523 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ktt\" (UniqueName: \"kubernetes.io/projected/ff037186-7ca7-4860-a8ae-0d3b84abe5da-kube-api-access-z4ktt\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854534 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854544 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854559 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:17 crc kubenswrapper[4642]: I0128 07:05:17.854568 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff037186-7ca7-4860-a8ae-0d3b84abe5da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.009268 4642 generic.go:334] "Generic (PLEG): container finished" podID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerID="2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f" exitCode=0 Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.009373 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" event={"ID":"ff037186-7ca7-4860-a8ae-0d3b84abe5da","Type":"ContainerDied","Data":"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f"} Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 
07:05:18.009349 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.009445 4642 scope.go:117] "RemoveContainer" containerID="2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.009430 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d574698ff-4mg6j" event={"ID":"ff037186-7ca7-4860-a8ae-0d3b84abe5da","Type":"ContainerDied","Data":"b14d9eb602b348ed9b56037b54a171cada0610389e6bdc4758a5a2ef43f78cb8"} Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.012981 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c4c401-8d2d-479c-bbb2-75b0f3ac300a","Type":"ContainerStarted","Data":"986baa3a8a648c904a73eacd81374312c6741c4575e2ff24a80f25802c4384d7"} Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.013013 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c4c401-8d2d-479c-bbb2-75b0f3ac300a","Type":"ContainerStarted","Data":"0448ff65cf454a14ecb7201eb6dfb773ce7cae17097e51e99c836735bdcbf50f"} Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.015557 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerStarted","Data":"589ce055a44dc932a4cf73eb592a2d7bbc57a0c78a5d51614ca6a10430389f96"} Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.015752 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="cinder-scheduler" containerID="cri-o://7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6" gracePeriod=30 Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.016406 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="probe" containerID="cri-o://6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6" gracePeriod=30 Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.080371 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.082984 4642 scope.go:117] "RemoveContainer" containerID="48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.092774 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d574698ff-4mg6j"] Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.111461 4642 scope.go:117] "RemoveContainer" containerID="2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f" Jan 28 07:05:18 crc kubenswrapper[4642]: E0128 07:05:18.116419 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f\": container with ID starting with 2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f not found: ID does not exist" containerID="2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.116462 4642 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f"} err="failed to get container status \"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f\": rpc error: code = NotFound desc = could not find container \"2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f\": container with ID starting with 2646b67997ec10c0776f8fd0723bba77015132cb25ec60ba5ed35a6950ed037f not found: ID does not exist" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.116500 4642 scope.go:117] "RemoveContainer" containerID="48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8" Jan 28 07:05:18 crc kubenswrapper[4642]: E0128 07:05:18.116731 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8\": container with ID starting with 48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8 not found: ID does not exist" containerID="48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8" Jan 28 07:05:18 crc kubenswrapper[4642]: I0128 07:05:18.116752 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8"} err="failed to get container status \"48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8\": rpc error: code = NotFound desc = could not find container \"48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8\": container with ID starting with 48d4ff721828b99fdf26f6192a12929095911935a1ae3257a070ef4b0df1e6d8 not found: ID does not exist" Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.115322 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" path="/var/lib/kubelet/pods/ff037186-7ca7-4860-a8ae-0d3b84abe5da/volumes" Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.116124 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16c4c401-8d2d-479c-bbb2-75b0f3ac300a","Type":"ContainerStarted","Data":"7c817b515711da423f185e46af7435ae77260fa74d0779babeae80eda694f23d"} Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.121310 4642 generic.go:334] "Generic (PLEG): container finished" podID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerID="6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6" exitCode=0 Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.121359 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerDied","Data":"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6"} Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.126887 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerStarted","Data":"b3d9551d0d1a7f653d1fb72fa21fc3c41e26b8ab7771eebc30eb4d18b5f697ae"} Jan 28 07:05:19 crc kubenswrapper[4642]: I0128 07:05:19.145871 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.145856732 podStartE2EDuration="3.145856732s" podCreationTimestamp="2026-01-28 07:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 07:05:19.139381938 +0000 UTC m=+1042.371470747" watchObservedRunningTime="2026-01-28 07:05:19.145856732 +0000 UTC m=+1042.377945541" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.138365 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerStarted","Data":"7ee9a67bd82c2559c1a77068d63cdc7d814a8f1e2f6387f653d61bfbbcb735c2"} Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.495887 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.619638 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.619903 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.620070 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.620090 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx64s\" (UniqueName: \"kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.620147 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.620162 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts\") pod \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\" (UID: \"db3bf7e9-aede-4b15-8c39-49ecafa0e435\") " Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.620470 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.624719 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.630607 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s" (OuterVolumeSpecName: "kube-api-access-mx64s") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "kube-api-access-mx64s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.640954 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts" (OuterVolumeSpecName: "scripts") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.671952 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.706823 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data" (OuterVolumeSpecName: "config-data") pod "db3bf7e9-aede-4b15-8c39-49ecafa0e435" (UID: "db3bf7e9-aede-4b15-8c39-49ecafa0e435"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724083 4642 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724112 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx64s\" (UniqueName: \"kubernetes.io/projected/db3bf7e9-aede-4b15-8c39-49ecafa0e435-kube-api-access-mx64s\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724127 4642 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db3bf7e9-aede-4b15-8c39-49ecafa0e435-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724139 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724149 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.724159 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3bf7e9-aede-4b15-8c39-49ecafa0e435-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.806614 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.806677 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.837606 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 07:05:20 crc kubenswrapper[4642]: I0128 07:05:20.838320 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.150634 4642 generic.go:334] "Generic (PLEG): container finished" podID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerID="7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6" exitCode=0 Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.151622 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.153383 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerDied","Data":"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6"} Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.153444 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db3bf7e9-aede-4b15-8c39-49ecafa0e435","Type":"ContainerDied","Data":"dc35eb106092e44e1857570cec276dd3dc2d1d0545fcfbfb6f6a59cf5b84b05f"} Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.153467 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.153495 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.153510 4642 scope.go:117] "RemoveContainer" containerID="6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.194710 4642 scope.go:117] "RemoveContainer" containerID="7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.219888 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.225348 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.245179 4642 scope.go:117] "RemoveContainer" containerID="6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.246583 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6\": container with ID starting with 6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6 not found: ID does not exist" containerID="6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.246636 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6"} err="failed to get container status \"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6\": rpc error: code = NotFound desc = could not find container \"6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6\": container with ID starting with 6c837c17f8e7eb134bb44017a6d3b082b078af8cb100436a10ec121c0c43a7f6 not found: ID does not exist" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.246672 4642 scope.go:117] "RemoveContainer" containerID="7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.248716 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6\": container with ID starting with 7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6 not found: ID does not exist" 
containerID="7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.248747 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6"} err="failed to get container status \"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6\": rpc error: code = NotFound desc = could not find container \"7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6\": container with ID starting with 7c7b3cbf200a25150acd856562731a67fe6021ff382ddfe47c92ff5214a190d6 not found: ID does not exist" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.267252 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.268257 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="dnsmasq-dns" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268282 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="dnsmasq-dns" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.268315 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="probe" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268323 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="probe" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.268341 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="cinder-scheduler" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268347 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="cinder-scheduler" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.268363 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="init" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268369 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="init" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268685 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="cinder-scheduler" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268717 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" containerName="probe" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.268742 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff037186-7ca7-4860-a8ae-0d3b84abe5da" containerName="dnsmasq-dns" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.270061 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.281001 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.288630 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.356740 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.356977 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.357052 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57740542-145e-4f7e-a313-ef87683e27cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.357097 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvvvd\" (UniqueName: \"kubernetes.io/projected/57740542-145e-4f7e-a313-ef87683e27cd-kube-api-access-jvvvd\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.357165 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.357266 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: E0128 07:05:21.418761 4642 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3bf7e9_aede_4b15_8c39_49ecafa0e435.slice/crio-dc35eb106092e44e1857570cec276dd3dc2d1d0545fcfbfb6f6a59cf5b84b05f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3bf7e9_aede_4b15_8c39_49ecafa0e435.slice\": RecentStats: unable to find data in memory cache]" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.460397 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.461065 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57740542-145e-4f7e-a313-ef87683e27cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.461234 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57740542-145e-4f7e-a313-ef87683e27cd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.461289 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvvvd\" (UniqueName: \"kubernetes.io/projected/57740542-145e-4f7e-a313-ef87683e27cd-kube-api-access-jvvvd\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.461928 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.462236 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.462583 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.469291 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-scripts\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.469464 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.472180 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.473038 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/57740542-145e-4f7e-a313-ef87683e27cd-config-data\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.480414 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvvvd\" (UniqueName: \"kubernetes.io/projected/57740542-145e-4f7e-a313-ef87683e27cd-kube-api-access-jvvvd\") pod \"cinder-scheduler-0\" (UID: \"57740542-145e-4f7e-a313-ef87683e27cd\") " pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.602526 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 07:05:21 crc kubenswrapper[4642]: I0128 07:05:21.991735 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 07:05:22 crc kubenswrapper[4642]: W0128 07:05:22.005085 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57740542_145e_4f7e_a313_ef87683e27cd.slice/crio-4d29c95af5f6f6f6740846d4f2d2121e4efd8eb10a9b2ef1d6549005a41d8cbc WatchSource:0}: Error finding container 4d29c95af5f6f6f6740846d4f2d2121e4efd8eb10a9b2ef1d6549005a41d8cbc: Status 404 returned error can't find the container with id 4d29c95af5f6f6f6740846d4f2d2121e4efd8eb10a9b2ef1d6549005a41d8cbc Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.173241 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57740542-145e-4f7e-a313-ef87683e27cd","Type":"ContainerStarted","Data":"4d29c95af5f6f6f6740846d4f2d2121e4efd8eb10a9b2ef1d6549005a41d8cbc"} Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.178638 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerStarted","Data":"9b6e7f7e2f89711f9a74525fbeddb76a9d12c58154be9fe053e3782a8a0d30c8"} Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.178682 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.200007 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.19656618 podStartE2EDuration="9.199992262s" podCreationTimestamp="2026-01-28 07:05:13 +0000 UTC" firstStartedPulling="2026-01-28 07:05:14.076405887 +0000 UTC m=+1037.308494696" lastFinishedPulling="2026-01-28 07:05:21.079831969 +0000 UTC m=+1044.311920778" observedRunningTime="2026-01-28 07:05:22.194432387 +0000 UTC m=+1045.426521197" watchObservedRunningTime="2026-01-28 07:05:22.199992262 +0000 UTC m=+1045.432081071" Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.790504 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.792878 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 07:05:22 crc kubenswrapper[4642]: I0128 07:05:22.975401 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 07:05:23 crc kubenswrapper[4642]: I0128 07:05:23.107278 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3bf7e9-aede-4b15-8c39-49ecafa0e435" 
path="/var/lib/kubelet/pods/db3bf7e9-aede-4b15-8c39-49ecafa0e435/volumes" Jan 28 07:05:23 crc kubenswrapper[4642]: I0128 07:05:23.193447 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57740542-145e-4f7e-a313-ef87683e27cd","Type":"ContainerStarted","Data":"df45b7bb77c71d88b135d026062e3e5ce8538c926fd05dc64e29ee2c6c2fef69"} Jan 28 07:05:23 crc kubenswrapper[4642]: I0128 07:05:23.193513 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"57740542-145e-4f7e-a313-ef87683e27cd","Type":"ContainerStarted","Data":"631d9b39a9c0cd5e47ef887cd2b8bf2a7e457025bf656ddaf47ee06b5882346b"} Jan 28 07:05:23 crc kubenswrapper[4642]: I0128 07:05:23.217249 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.21723197 podStartE2EDuration="2.21723197s" podCreationTimestamp="2026-01-28 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:23.214849912 +0000 UTC m=+1046.446938721" watchObservedRunningTime="2026-01-28 07:05:23.21723197 +0000 UTC m=+1046.449320779" Jan 28 07:05:26 crc kubenswrapper[4642]: I0128 07:05:26.602643 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 07:05:26 crc kubenswrapper[4642]: I0128 07:05:26.684445 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:26 crc kubenswrapper[4642]: I0128 07:05:26.684524 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:26 crc kubenswrapper[4642]: I0128 07:05:26.712938 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:26 crc kubenswrapper[4642]: I0128 07:05:26.720953 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.239675 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.240216 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.587946 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.588756 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-central-agent" containerID="cri-o://589ce055a44dc932a4cf73eb592a2d7bbc57a0c78a5d51614ca6a10430389f96" gracePeriod=30 Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.588964 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="proxy-httpd" containerID="cri-o://9b6e7f7e2f89711f9a74525fbeddb76a9d12c58154be9fe053e3782a8a0d30c8" gracePeriod=30 Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.589036 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-notification-agent" containerID="cri-o://b3d9551d0d1a7f653d1fb72fa21fc3c41e26b8ab7771eebc30eb4d18b5f697ae" gracePeriod=30 Jan 28 07:05:27 crc kubenswrapper[4642]: I0128 07:05:27.589222 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="sg-core" containerID="cri-o://7ee9a67bd82c2559c1a77068d63cdc7d814a8f1e2f6387f653d61bfbbcb735c2" gracePeriod=30 Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.247853 4642 generic.go:334] "Generic (PLEG): container finished" podID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerID="9b6e7f7e2f89711f9a74525fbeddb76a9d12c58154be9fe053e3782a8a0d30c8" exitCode=0 Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.248088 4642 generic.go:334] "Generic (PLEG): container finished" podID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerID="7ee9a67bd82c2559c1a77068d63cdc7d814a8f1e2f6387f653d61bfbbcb735c2" exitCode=2 Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.248098 4642 generic.go:334] "Generic (PLEG): container finished" podID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerID="589ce055a44dc932a4cf73eb592a2d7bbc57a0c78a5d51614ca6a10430389f96" exitCode=0 Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.247919 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerDied","Data":"9b6e7f7e2f89711f9a74525fbeddb76a9d12c58154be9fe053e3782a8a0d30c8"} Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.248179 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerDied","Data":"7ee9a67bd82c2559c1a77068d63cdc7d814a8f1e2f6387f653d61bfbbcb735c2"} Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.248204 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerDied","Data":"589ce055a44dc932a4cf73eb592a2d7bbc57a0c78a5d51614ca6a10430389f96"} Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.908872 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:28 crc kubenswrapper[4642]: I0128 07:05:28.911101 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.263846 4642 generic.go:334] "Generic (PLEG): container finished" podID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerID="b3d9551d0d1a7f653d1fb72fa21fc3c41e26b8ab7771eebc30eb4d18b5f697ae" exitCode=0 Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.263927 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerDied","Data":"b3d9551d0d1a7f653d1fb72fa21fc3c41e26b8ab7771eebc30eb4d18b5f697ae"} Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.517027 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660285 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660324 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48bzh\" (UniqueName: \"kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660492 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660532 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660555 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660700 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660842 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.660868 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle\") pod \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\" (UID: \"8c68c4f2-ad0e-4540-acd5-185f6a8568eb\") " Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.661125 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.661713 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.661733 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.666739 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts" (OuterVolumeSpecName: "scripts") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.671421 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh" (OuterVolumeSpecName: "kube-api-access-48bzh") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "kube-api-access-48bzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.684513 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.725913 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.732991 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data" (OuterVolumeSpecName: "config-data") pod "8c68c4f2-ad0e-4540-acd5-185f6a8568eb" (UID: "8c68c4f2-ad0e-4540-acd5-185f6a8568eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.763431 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.763456 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48bzh\" (UniqueName: \"kubernetes.io/projected/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-kube-api-access-48bzh\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.763468 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.763487 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:29 crc kubenswrapper[4642]: I0128 07:05:29.763497 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c68c4f2-ad0e-4540-acd5-185f6a8568eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.274282 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.274319 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c68c4f2-ad0e-4540-acd5-185f6a8568eb","Type":"ContainerDied","Data":"f4ac43677dc3d6f42fe29284a3d3134143b483c2d32c346f145a66da7d204fbf"} Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.274349 4642 scope.go:117] "RemoveContainer" containerID="9b6e7f7e2f89711f9a74525fbeddb76a9d12c58154be9fe053e3782a8a0d30c8" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.299618 4642 scope.go:117] "RemoveContainer" containerID="7ee9a67bd82c2559c1a77068d63cdc7d814a8f1e2f6387f653d61bfbbcb735c2" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.309400 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.317207 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322178 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:30 crc kubenswrapper[4642]: E0128 07:05:30.322542 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="sg-core" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322559 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="sg-core" Jan 28 07:05:30 crc kubenswrapper[4642]: E0128 07:05:30.322582 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-notification-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322587 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-notification-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: E0128 07:05:30.322601 4642 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="proxy-httpd" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322609 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="proxy-httpd" Jan 28 07:05:30 crc kubenswrapper[4642]: E0128 07:05:30.322623 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-central-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322628 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-central-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322753 4642 scope.go:117] "RemoveContainer" containerID="b3d9551d0d1a7f653d1fb72fa21fc3c41e26b8ab7771eebc30eb4d18b5f697ae" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322789 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="proxy-httpd" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322800 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="sg-core" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322814 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-central-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.322821 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" containerName="ceilometer-notification-agent" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.324234 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.329796 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.331117 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.344159 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.357228 4642 scope.go:117] "RemoveContainer" containerID="589ce055a44dc932a4cf73eb592a2d7bbc57a0c78a5d51614ca6a10430389f96" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473593 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473696 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j9gj\" (UniqueName: \"kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473721 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473777 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473806 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473914 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.473934 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575611 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575660 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575682 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575776 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575795 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575825 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.575858 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j9gj\" (UniqueName: \"kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.576404 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.576720 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.578927 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.579343 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.579927 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.580784 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.595717 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j9gj\" (UniqueName: \"kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj\") pod \"ceilometer-0\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " pod="openstack/ceilometer-0" Jan 28 07:05:30 crc kubenswrapper[4642]: I0128 07:05:30.652812 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:05:31 crc kubenswrapper[4642]: I0128 07:05:31.063471 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:05:31 crc kubenswrapper[4642]: W0128 07:05:31.068342 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50fa8a6c_3a6a_4b78_8330_61f133fc251d.slice/crio-6962e4bcaecd30dda8626e5da25630b5dad87cf18cc8f4dad49411ca533a2c2c WatchSource:0}: Error finding container 6962e4bcaecd30dda8626e5da25630b5dad87cf18cc8f4dad49411ca533a2c2c: Status 404 returned error can't find the container with id 6962e4bcaecd30dda8626e5da25630b5dad87cf18cc8f4dad49411ca533a2c2c Jan 28 07:05:31 crc kubenswrapper[4642]: I0128 07:05:31.108395 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c68c4f2-ad0e-4540-acd5-185f6a8568eb" path="/var/lib/kubelet/pods/8c68c4f2-ad0e-4540-acd5-185f6a8568eb/volumes" Jan 28 07:05:31 crc kubenswrapper[4642]: I0128 07:05:31.281958 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerStarted","Data":"6962e4bcaecd30dda8626e5da25630b5dad87cf18cc8f4dad49411ca533a2c2c"} Jan 28 07:05:31 crc kubenswrapper[4642]: I0128 07:05:31.772051 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 07:05:37 crc kubenswrapper[4642]: I0128 07:05:37.351138 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerStarted","Data":"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c"} Jan 28 07:05:37 crc kubenswrapper[4642]: I0128 07:05:37.353633 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" event={"ID":"40790093-08bf-4d1a-8718-dc943de05f37","Type":"ContainerStarted","Data":"bc6af9f4d68d97ec9d5d0022d3c1e57a8f03a83462648eaa37c994d8784716b6"} Jan 28 07:05:37 crc kubenswrapper[4642]: I0128 07:05:37.372902 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-conductor-db-sync-5n5zh" podStartSLOduration=1.9988306169999999 podStartE2EDuration="28.372879432s" podCreationTimestamp="2026-01-28 07:05:09 +0000 UTC" firstStartedPulling="2026-01-28 07:05:10.099109499 +0000 UTC m=+1033.331198308" lastFinishedPulling="2026-01-28 07:05:36.473158324 +0000 UTC m=+1059.705247123" observedRunningTime="2026-01-28 07:05:37.367402484 +0000 UTC m=+1060.599491293" watchObservedRunningTime="2026-01-28 07:05:37.372879432 +0000 UTC m=+1060.604968241" Jan 28 07:05:38 crc kubenswrapper[4642]: I0128 07:05:38.199850 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:05:38 crc kubenswrapper[4642]: I0128 07:05:38.200152 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:05:38 crc kubenswrapper[4642]: I0128 07:05:38.361374 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerStarted","Data":"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731"} Jan 28 07:05:39 crc kubenswrapper[4642]: I0128 07:05:39.372153 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerStarted","Data":"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779"} Jan 28 07:05:40 crc kubenswrapper[4642]: I0128 07:05:40.379240 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerStarted","Data":"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f"} Jan 28 07:05:40 crc kubenswrapper[4642]: I0128 07:05:40.379580 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:05:40 crc kubenswrapper[4642]: I0128 07:05:40.402800 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.67684063 podStartE2EDuration="10.402790754s" podCreationTimestamp="2026-01-28 07:05:30 +0000 UTC" firstStartedPulling="2026-01-28 07:05:31.070045431 +0000 UTC m=+1054.302134239" lastFinishedPulling="2026-01-28 07:05:39.795995553 +0000 UTC m=+1063.028084363" observedRunningTime="2026-01-28 07:05:40.399830799 +0000 UTC m=+1063.631919608" watchObservedRunningTime="2026-01-28 07:05:40.402790754 +0000 UTC m=+1063.634879563" Jan 28 07:05:42 crc kubenswrapper[4642]: I0128 07:05:42.397007 4642 generic.go:334] "Generic (PLEG): container finished" podID="40790093-08bf-4d1a-8718-dc943de05f37" containerID="bc6af9f4d68d97ec9d5d0022d3c1e57a8f03a83462648eaa37c994d8784716b6" exitCode=0 Jan 28 07:05:42 crc kubenswrapper[4642]: I0128 07:05:42.397107 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" event={"ID":"40790093-08bf-4d1a-8718-dc943de05f37","Type":"ContainerDied","Data":"bc6af9f4d68d97ec9d5d0022d3c1e57a8f03a83462648eaa37c994d8784716b6"} Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 
07:05:43.700027 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.705013 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle\") pod \"40790093-08bf-4d1a-8718-dc943de05f37\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.705082 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hkmf\" (UniqueName: \"kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf\") pod \"40790093-08bf-4d1a-8718-dc943de05f37\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.705100 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data\") pod \"40790093-08bf-4d1a-8718-dc943de05f37\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.705152 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts\") pod \"40790093-08bf-4d1a-8718-dc943de05f37\" (UID: \"40790093-08bf-4d1a-8718-dc943de05f37\") " Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.709829 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts" (OuterVolumeSpecName: "scripts") pod "40790093-08bf-4d1a-8718-dc943de05f37" (UID: "40790093-08bf-4d1a-8718-dc943de05f37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.710117 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf" (OuterVolumeSpecName: "kube-api-access-2hkmf") pod "40790093-08bf-4d1a-8718-dc943de05f37" (UID: "40790093-08bf-4d1a-8718-dc943de05f37"). InnerVolumeSpecName "kube-api-access-2hkmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.731438 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40790093-08bf-4d1a-8718-dc943de05f37" (UID: "40790093-08bf-4d1a-8718-dc943de05f37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.742332 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data" (OuterVolumeSpecName: "config-data") pod "40790093-08bf-4d1a-8718-dc943de05f37" (UID: "40790093-08bf-4d1a-8718-dc943de05f37"). InnerVolumeSpecName "config-data". 
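The nova-cell0-conductor-db-sync pod in this stretch is a run-to-completion job: its container exits with code 0, the sandbox is torn down, its volumes are unmounted, and only then (in the entries that follow) does the operator create nova-cell0-conductor-0, serializing the schema migration before the service starts. A sketch of that gate under the assumption of a simple poll loop, not the operator's actual controller code:

package main

import (
	"fmt"
	"time"
)

// waitForJob polls a status function until the job finishes, the way an
// operator waits for db-sync before creating the conductor pod.
func waitForJob(status func() (exitCode int, done bool), interval time.Duration) error {
	for {
		code, done := status()
		if done {
			if code != 0 {
				return fmt.Errorf("job failed with exit code %d", code)
			}
			return nil // exit 0: safe to start dependents
		}
		time.Sleep(interval)
	}
}

func main() {
	ticks := 0
	status := func() (int, bool) { ticks++; return 0, ticks >= 3 }
	if err := waitForJob(status, 10*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("db-sync complete; creating nova-cell0-conductor-0")
}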
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.807062 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hkmf\" (UniqueName: \"kubernetes.io/projected/40790093-08bf-4d1a-8718-dc943de05f37-kube-api-access-2hkmf\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.807090 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.807100 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:43 crc kubenswrapper[4642]: I0128 07:05:43.807109 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40790093-08bf-4d1a-8718-dc943de05f37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.414224 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" event={"ID":"40790093-08bf-4d1a-8718-dc943de05f37","Type":"ContainerDied","Data":"e34c2710e4e72f8d4e99c4c33dd45c3622f977b0a5c22553847736304b6e4c1f"} Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.414610 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e34c2710e4e72f8d4e99c4c33dd45c3622f977b0a5c22553847736304b6e4c1f" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.414271 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5n5zh" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.486869 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 07:05:44 crc kubenswrapper[4642]: E0128 07:05:44.487296 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40790093-08bf-4d1a-8718-dc943de05f37" containerName="nova-cell0-conductor-db-sync" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.487312 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="40790093-08bf-4d1a-8718-dc943de05f37" containerName="nova-cell0-conductor-db-sync" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.487466 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="40790093-08bf-4d1a-8718-dc943de05f37" containerName="nova-cell0-conductor-db-sync" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.487973 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.489552 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-668vb" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.489743 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.493219 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.623382 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.623671 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6dr\" (UniqueName: \"kubernetes.io/projected/5b79b1b9-2072-44eb-ab2f-977a02871f54-kube-api-access-kw6dr\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.623703 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.725446 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6dr\" (UniqueName: \"kubernetes.io/projected/5b79b1b9-2072-44eb-ab2f-977a02871f54-kube-api-access-kw6dr\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.726130 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.726348 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.730324 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.730343 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b79b1b9-2072-44eb-ab2f-977a02871f54-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.739572 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6dr\" (UniqueName: \"kubernetes.io/projected/5b79b1b9-2072-44eb-ab2f-977a02871f54-kube-api-access-kw6dr\") pod \"nova-cell0-conductor-0\" (UID: \"5b79b1b9-2072-44eb-ab2f-977a02871f54\") " pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:44 crc kubenswrapper[4642]: I0128 07:05:44.804413 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:45 crc kubenswrapper[4642]: I0128 07:05:45.188665 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 07:05:45 crc kubenswrapper[4642]: I0128 07:05:45.423116 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5b79b1b9-2072-44eb-ab2f-977a02871f54","Type":"ContainerStarted","Data":"133b572ae1bf52205602fc1029f460e80fd274081bcc332545f233de6ba84e47"} Jan 28 07:05:45 crc kubenswrapper[4642]: I0128 07:05:45.423494 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:45 crc kubenswrapper[4642]: I0128 07:05:45.423535 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5b79b1b9-2072-44eb-ab2f-977a02871f54","Type":"ContainerStarted","Data":"0ee45ba7806f246663dfe122e3e0a465c13d1c4c18a2bc3c2d7d09948849bfb5"} Jan 28 07:05:45 crc kubenswrapper[4642]: I0128 07:05:45.435299 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.435285193 podStartE2EDuration="1.435285193s" podCreationTimestamp="2026-01-28 07:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:05:45.432781606 +0000 UTC m=+1068.664870415" watchObservedRunningTime="2026-01-28 07:05:45.435285193 +0000 UTC m=+1068.667374002" Jan 28 07:05:51 crc kubenswrapper[4642]: I0128 07:05:51.168340 4642 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddb3bf7e9-aede-4b15-8c39-49ecafa0e435"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddb3bf7e9-aede-4b15-8c39-49ecafa0e435] : Timed out while waiting for systemd to remove kubepods-besteffort-poddb3bf7e9_aede_4b15_8c39_49ecafa0e435.slice" Jan 28 07:05:54 crc kubenswrapper[4642]: I0128 07:05:54.826481 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.246040 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qch6m"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.247147 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.248978 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.249133 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.254101 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qch6m"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.362534 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.363589 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.367932 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.374119 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.389844 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.391448 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.398466 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.413179 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.427950 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5ms\" (UniqueName: \"kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428014 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68nz\" (UniqueName: \"kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428082 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428106 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: 
I0128 07:05:55.428129 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428207 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh7rq\" (UniqueName: \"kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428230 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428319 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428352 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.428404 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.473848 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.477026 4642 util.go:30] "No sandbox for pod can be found. 
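Each of the pods admitted in this burst carries an auto-generated projected service-account token volume, visible above as kube-api-access-jv5ms, -f68nz, -qh7rq and, below, -sntqz: same volume contents, random five-character suffix per pod. A sketch of generating such a suffix; the restricted alphabet (vowels and look-alike characters omitted) matches my understanding of Kubernetes' name-suffix generator, but treat it as an assumption:

package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// alphanums approximates the alphabet used for generated name suffixes
// (assumption for illustration; vowels and confusable glyphs omitted).
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

// suffix returns an n-character random suffix like the "jv5ms" in
// kube-api-access-jv5ms above.
func suffix(n int) string {
	out := make([]byte, n)
	for i := range out {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphanums))))
		if err != nil {
			panic(err)
		}
		out[i] = alphanums[idx.Int64()]
	}
	return string(out)
}

func main() {
	fmt.Println("kube-api-access-" + suffix(5))
}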
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.480920 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.483900 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.530909 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.530960 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh7rq\" (UniqueName: \"kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531014 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531047 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531097 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531213 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv5ms\" (UniqueName: \"kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531251 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531278 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68nz\" (UniqueName: \"kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531322 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntqz\" (UniqueName: \"kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531350 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531382 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531402 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531430 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.531502 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.539745 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.544797 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.545124 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.553708 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data\") 
pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.564654 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.565821 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.571346 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv5ms\" (UniqueName: \"kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.572127 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.572734 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh7rq\" (UniqueName: \"kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq\") pod \"nova-scheduler-0\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.576222 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68nz\" (UniqueName: \"kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz\") pod \"nova-cell0-cell-mapping-qch6m\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.591032 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.592333 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.615268 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.624232 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.625861 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.629501 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.632879 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.632951 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbrr\" (UniqueName: \"kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.632994 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633025 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633050 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633356 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633405 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntqz\" (UniqueName: \"kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633427 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633444 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: 
\"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633460 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633493 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633524 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633543 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.633559 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5df9\" (UniqueName: \"kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.634397 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.634749 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.652489 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.661658 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntqz\" (UniqueName: \"kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.662812 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data\") pod \"nova-metadata-0\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " 
pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.682959 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.740327 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741021 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741114 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjbrr\" (UniqueName: \"kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741159 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741212 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741236 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741279 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741303 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741334 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741372 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741403 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5df9\" (UniqueName: \"kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.741692 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.742296 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.742366 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.743263 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.746215 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.749011 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.752567 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.761965 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjbrr\" (UniqueName: \"kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr\") pod \"nova-api-0\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " pod="openstack/nova-api-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.768388 
4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5df9\" (UniqueName: \"kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.777862 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb\") pod \"dnsmasq-dns-84f7f4c6c9-w4c7k\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.806246 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.867774 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.939867 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:05:55 crc kubenswrapper[4642]: I0128 07:05:55.949650 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.186387 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.322024 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tb67c"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.323371 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.325676 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.325858 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.344334 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tb67c"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.377364 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.383812 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.463264 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.463313 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.463824 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc8t\" (UniqueName: \"kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.463873 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.511422 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f280eea-8ffb-4200-9357-df15e71681b0","Type":"ContainerStarted","Data":"f7501da1456f5d903e7fed9ca9a5b907eff80ddab1bd909225b284e7d9651dd4"} Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.565016 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.565062 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts\") pod 
\"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.565118 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc8t\" (UniqueName: \"kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.565150 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.571106 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.578088 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.578542 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.585280 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc8t\" (UniqueName: \"kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t\") pod \"nova-cell1-conductor-db-sync-tb67c\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.592534 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qch6m"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.666275 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.684100 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:05:56 crc kubenswrapper[4642]: I0128 07:05:56.738897 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:05:58 crc kubenswrapper[4642]: I0128 07:05:58.641381 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:05:58 crc kubenswrapper[4642]: I0128 07:05:58.655572 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 07:06:00 crc kubenswrapper[4642]: I0128 07:06:00.661282 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 07:06:00 crc kubenswrapper[4642]: W0128 07:06:00.731129 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a6a27a0_9f13_4980_95f5_ec06d1be7492.slice/crio-b5eefdaad6eebc05c7d2c9748459c727908002b2bf34914d8d0903731a1b4a48 WatchSource:0}: Error finding container b5eefdaad6eebc05c7d2c9748459c727908002b2bf34914d8d0903731a1b4a48: Status 404 returned error can't find the container with id b5eefdaad6eebc05c7d2c9748459c727908002b2bf34914d8d0903731a1b4a48 Jan 28 07:06:00 crc kubenswrapper[4642]: W0128 07:06:00.747285 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2388a8f5_7be5_4284_9370_23d6f1545c8a.slice/crio-2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb WatchSource:0}: Error finding container 2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb: Status 404 returned error can't find the container with id 2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb Jan 28 07:06:01 crc kubenswrapper[4642]: W0128 07:06:01.222499 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c94226_5b5d_4c10_a0fb_fcf5e42e34c0.slice/crio-135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2 WatchSource:0}: Error finding container 135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2: Status 404 returned error can't find the container with id 135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2 Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.224107 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tb67c"] Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.569224 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c754de0a-7ee3-416f-988d-d0eb4829ea99","Type":"ContainerStarted","Data":"69195684d0596aa41f6df8c8bda55af0d8a7fa1d4a035541fc9bcf513f818442"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.570686 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qch6m" event={"ID":"2388a8f5-7be5-4284-9370-23d6f1545c8a","Type":"ContainerStarted","Data":"5645b4498308085d82bbbfaac6f38f39fd0b913a902a9cd0bb0c304c0d6e6a10"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.570728 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qch6m" 
event={"ID":"2388a8f5-7be5-4284-9370-23d6f1545c8a","Type":"ContainerStarted","Data":"2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.572987 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerStarted","Data":"160f64ba1c93ed6391e82ad312397e3fd0c81dc2095c066a586852c02aa9af39"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.574532 4642 generic.go:334] "Generic (PLEG): container finished" podID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerID="a075b8d5feaefb4d57b1c6a606d793ed73c1897b90c8fee4fb36cd4bc84d3aeb" exitCode=0 Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.574589 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" event={"ID":"4c8ab084-e8fb-46a0-8196-1b16c11574cc","Type":"ContainerDied","Data":"a075b8d5feaefb4d57b1c6a606d793ed73c1897b90c8fee4fb36cd4bc84d3aeb"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.574668 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" event={"ID":"4c8ab084-e8fb-46a0-8196-1b16c11574cc","Type":"ContainerStarted","Data":"95b5da2f34c8c1d03d774f09fb985ca7c07273343ec305bf2c2c4a15183adefe"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.575741 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerStarted","Data":"4baa9d3c0a7066986ba7f09d410c6211fdc554581ead0bbca851be727fc9b6c2"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.577315 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tb67c" event={"ID":"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0","Type":"ContainerStarted","Data":"1110d8f2c269c3abed7283437cb94bcd9cc7197b6f63d13c65ce695d81875105"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.577341 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tb67c" event={"ID":"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0","Type":"ContainerStarted","Data":"135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.578702 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a6a27a0-9f13-4980-95f5-ec06d1be7492","Type":"ContainerStarted","Data":"b5eefdaad6eebc05c7d2c9748459c727908002b2bf34914d8d0903731a1b4a48"} Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.585629 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=10.320152757 podStartE2EDuration="1m11.585618821s" podCreationTimestamp="2026-01-28 07:04:50 +0000 UTC" firstStartedPulling="2026-01-28 07:04:59.566236776 +0000 UTC m=+1022.798325575" lastFinishedPulling="2026-01-28 07:06:00.83170283 +0000 UTC m=+1084.063791639" observedRunningTime="2026-01-28 07:06:01.58262368 +0000 UTC m=+1084.814712489" watchObservedRunningTime="2026-01-28 07:06:01.585618821 +0000 UTC m=+1084.817707630" Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.622214 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tb67c" podStartSLOduration=5.622179754 podStartE2EDuration="5.622179754s" podCreationTimestamp="2026-01-28 07:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:01.593860548 +0000 UTC m=+1084.825949357" watchObservedRunningTime="2026-01-28 07:06:01.622179754 +0000 UTC m=+1084.854268563" Jan 28 07:06:01 crc kubenswrapper[4642]: I0128 07:06:01.642683 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qch6m" podStartSLOduration=6.642662527 podStartE2EDuration="6.642662527s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:01.630646844 +0000 UTC m=+1084.862735653" watchObservedRunningTime="2026-01-28 07:06:01.642662527 +0000 UTC m=+1084.874751335" Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.590995 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f280eea-8ffb-4200-9357-df15e71681b0","Type":"ContainerStarted","Data":"129ec0622f1eed4bd86fd6f642ea8bf1d9bc0dead6da5d1e928290ce9f026f40"} Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.591292 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6f280eea-8ffb-4200-9357-df15e71681b0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://129ec0622f1eed4bd86fd6f642ea8bf1d9bc0dead6da5d1e928290ce9f026f40" gracePeriod=30 Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.597170 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" event={"ID":"4c8ab084-e8fb-46a0-8196-1b16c11574cc","Type":"ContainerStarted","Data":"5ca8d221107a797165bcbbe394753e29d5cef4cb1e8fdf395528377cf1de7395"} Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.597219 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.612130 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.5868329920000002 podStartE2EDuration="7.612119693s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="2026-01-28 07:05:56.209839081 +0000 UTC m=+1079.441927889" lastFinishedPulling="2026-01-28 07:06:02.235125781 +0000 UTC m=+1085.467214590" observedRunningTime="2026-01-28 07:06:02.605415849 +0000 UTC m=+1085.837504658" watchObservedRunningTime="2026-01-28 07:06:02.612119693 +0000 UTC m=+1085.844208502" Jan 28 07:06:02 crc kubenswrapper[4642]: I0128 07:06:02.630000 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" podStartSLOduration=7.629979884 podStartE2EDuration="7.629979884s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:02.620708371 +0000 UTC m=+1085.852797181" watchObservedRunningTime="2026-01-28 07:06:02.629979884 +0000 UTC m=+1085.862068694" Jan 28 07:06:04 crc kubenswrapper[4642]: I0128 07:06:04.576377 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:04 crc kubenswrapper[4642]: I0128 07:06:04.576859 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" 
containerName="kube-state-metrics" containerID="cri-o://afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28" gracePeriod=30 Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.007088 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.059393 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f9jv\" (UniqueName: \"kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv\") pod \"9f7f3800-a8e9-4ff6-9c02-dd125daac158\" (UID: \"9f7f3800-a8e9-4ff6-9c02-dd125daac158\") " Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.065106 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv" (OuterVolumeSpecName: "kube-api-access-9f9jv") pod "9f7f3800-a8e9-4ff6-9c02-dd125daac158" (UID: "9f7f3800-a8e9-4ff6-9c02-dd125daac158"). InnerVolumeSpecName "kube-api-access-9f9jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.163101 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f9jv\" (UniqueName: \"kubernetes.io/projected/9f7f3800-a8e9-4ff6-9c02-dd125daac158-kube-api-access-9f9jv\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.622435 4642 generic.go:334] "Generic (PLEG): container finished" podID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" containerID="afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28" exitCode=2 Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.622542 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.622504 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f7f3800-a8e9-4ff6-9c02-dd125daac158","Type":"ContainerDied","Data":"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28"} Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.622591 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9f7f3800-a8e9-4ff6-9c02-dd125daac158","Type":"ContainerDied","Data":"9da5c1c2acc46452661f92f9647a7ee2249c2eb3742590443b3d7cd83fd6508b"} Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.622616 4642 scope.go:117] "RemoveContainer" containerID="afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.669902 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.680957 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.683354 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.695238 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:05 crc kubenswrapper[4642]: E0128 07:06:05.695811 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" containerName="kube-state-metrics" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.695835 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" containerName="kube-state-metrics" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.696096 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" containerName="kube-state-metrics" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.696830 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.696927 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.701512 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.701678 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.773421 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.773579 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.773829 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw6w\" (UniqueName: \"kubernetes.io/projected/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-api-access-qlw6w\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.773949 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.880074 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.882317 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.882991 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.883323 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlw6w\" (UniqueName: \"kubernetes.io/projected/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-api-access-qlw6w\") pod \"kube-state-metrics-0\" (UID: 
\"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.886394 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.886444 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.887792 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:05 crc kubenswrapper[4642]: I0128 07:06:05.907167 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlw6w\" (UniqueName: \"kubernetes.io/projected/36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9-kube-api-access-qlw6w\") pod \"kube-state-metrics-0\" (UID: \"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9\") " pod="openstack/kube-state-metrics-0" Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.018789 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.028692 4642 scope.go:117] "RemoveContainer" containerID="afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28" Jan 28 07:06:06 crc kubenswrapper[4642]: E0128 07:06:06.029176 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28\": container with ID starting with afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28 not found: ID does not exist" containerID="afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28" Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.029231 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28"} err="failed to get container status \"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28\": rpc error: code = NotFound desc = could not find container \"afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28\": container with ID starting with afcb16d17f78b93690ef9378c1edabadab1b03ec3344e7a610ad9e38cbc99f28 not found: ID does not exist" Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.297446 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.298150 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-central-agent" containerID="cri-o://56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c" gracePeriod=30 Jan 28 07:06:06 crc 
kubenswrapper[4642]: I0128 07:06:06.298238 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="proxy-httpd" containerID="cri-o://ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f" gracePeriod=30 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.298271 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-notification-agent" containerID="cri-o://139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731" gracePeriod=30 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.298316 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="sg-core" containerID="cri-o://5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779" gracePeriod=30 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.478913 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 07:06:06 crc kubenswrapper[4642]: W0128 07:06:06.486619 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36fd1727_1fc5_4cf4_91a2_4d2a01c1d7c9.slice/crio-be4dd261a1f54f665ffd075feed0ef291a4d7e053070257cd755c61ce921f1ef WatchSource:0}: Error finding container be4dd261a1f54f665ffd075feed0ef291a4d7e053070257cd755c61ce921f1ef: Status 404 returned error can't find the container with id be4dd261a1f54f665ffd075feed0ef291a4d7e053070257cd755c61ce921f1ef Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.631604 4642 generic.go:334] "Generic (PLEG): container finished" podID="84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" containerID="1110d8f2c269c3abed7283437cb94bcd9cc7197b6f63d13c65ce695d81875105" exitCode=0 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.631683 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tb67c" event={"ID":"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0","Type":"ContainerDied","Data":"1110d8f2c269c3abed7283437cb94bcd9cc7197b6f63d13c65ce695d81875105"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.633539 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9","Type":"ContainerStarted","Data":"be4dd261a1f54f665ffd075feed0ef291a4d7e053070257cd755c61ce921f1ef"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635581 4642 generic.go:334] "Generic (PLEG): container finished" podID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerID="ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f" exitCode=0 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635674 4642 generic.go:334] "Generic (PLEG): container finished" podID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerID="5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779" exitCode=2 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635724 4642 generic.go:334] "Generic (PLEG): container finished" podID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerID="56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c" exitCode=0 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635796 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerDied","Data":"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635904 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerDied","Data":"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.635985 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerDied","Data":"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.637455 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerStarted","Data":"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.637577 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerStarted","Data":"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1"} Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.637667 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-log" containerID="cri-o://95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" gracePeriod=30 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.637682 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-metadata" containerID="cri-o://7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" gracePeriod=30 Jan 28 07:06:06 crc kubenswrapper[4642]: I0128 07:06:06.663279 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=6.322436279 podStartE2EDuration="11.663269663s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="2026-01-28 07:06:00.729980169 +0000 UTC m=+1083.962068978" lastFinishedPulling="2026-01-28 07:06:06.070813552 +0000 UTC m=+1089.302902362" observedRunningTime="2026-01-28 07:06:06.6571197 +0000 UTC m=+1089.889208509" watchObservedRunningTime="2026-01-28 07:06:06.663269663 +0000 UTC m=+1089.895358473" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.108432 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7f3800-a8e9-4ff6-9c02-dd125daac158" path="/var/lib/kubelet/pods/9f7f3800-a8e9-4ff6-9c02-dd125daac158/volumes" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.155903 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.208931 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle\") pod \"a4c7e21f-af49-4131-85d6-c3c5656153db\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.209127 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data\") pod \"a4c7e21f-af49-4131-85d6-c3c5656153db\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.210013 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntqz\" (UniqueName: \"kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz\") pod \"a4c7e21f-af49-4131-85d6-c3c5656153db\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.210133 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs\") pod \"a4c7e21f-af49-4131-85d6-c3c5656153db\" (UID: \"a4c7e21f-af49-4131-85d6-c3c5656153db\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.210704 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs" (OuterVolumeSpecName: "logs") pod "a4c7e21f-af49-4131-85d6-c3c5656153db" (UID: "a4c7e21f-af49-4131-85d6-c3c5656153db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.211091 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7e21f-af49-4131-85d6-c3c5656153db-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.213769 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz" (OuterVolumeSpecName: "kube-api-access-sntqz") pod "a4c7e21f-af49-4131-85d6-c3c5656153db" (UID: "a4c7e21f-af49-4131-85d6-c3c5656153db"). InnerVolumeSpecName "kube-api-access-sntqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.236446 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data" (OuterVolumeSpecName: "config-data") pod "a4c7e21f-af49-4131-85d6-c3c5656153db" (UID: "a4c7e21f-af49-4131-85d6-c3c5656153db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.241946 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4c7e21f-af49-4131-85d6-c3c5656153db" (UID: "a4c7e21f-af49-4131-85d6-c3c5656153db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.268223 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.312575 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.312745 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.312833 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.312848 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.313291 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.313310 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7e21f-af49-4131-85d6-c3c5656153db-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.313320 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntqz\" (UniqueName: \"kubernetes.io/projected/a4c7e21f-af49-4131-85d6-c3c5656153db-kube-api-access-sntqz\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.316631 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts" (OuterVolumeSpecName: "scripts") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.357620 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.406817 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.415330 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j9gj\" (UniqueName: \"kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.415583 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.415727 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd\") pod \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\" (UID: \"50fa8a6c-3a6a-4b78-8330-61f133fc251d\") " Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.416301 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.416374 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.416462 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.416869 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.417394 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.441361 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj" (OuterVolumeSpecName: "kube-api-access-8j9gj") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "kube-api-access-8j9gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.449550 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data" (OuterVolumeSpecName: "config-data") pod "50fa8a6c-3a6a-4b78-8330-61f133fc251d" (UID: "50fa8a6c-3a6a-4b78-8330-61f133fc251d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.520007 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j9gj\" (UniqueName: \"kubernetes.io/projected/50fa8a6c-3a6a-4b78-8330-61f133fc251d-kube-api-access-8j9gj\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.520034 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.520042 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50fa8a6c-3a6a-4b78-8330-61f133fc251d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.520052 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50fa8a6c-3a6a-4b78-8330-61f133fc251d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.647357 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9","Type":"ContainerStarted","Data":"e9d60012215011e6bf05bb4f8b3c8e0fe7539b5ca0ca88b200a2435c32fe24af"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.648551 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.658785 4642 generic.go:334] "Generic (PLEG): container finished" podID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerID="139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731" exitCode=0 Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.658836 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.658880 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerDied","Data":"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.658922 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50fa8a6c-3a6a-4b78-8330-61f133fc251d","Type":"ContainerDied","Data":"6962e4bcaecd30dda8626e5da25630b5dad87cf18cc8f4dad49411ca533a2c2c"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.658948 4642 scope.go:117] "RemoveContainer" containerID="ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661630 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerID="7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" exitCode=0 Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661655 4642 generic.go:334] "Generic (PLEG): container finished" podID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerID="95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" exitCode=143 Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661699 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerDied","Data":"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661729 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerDied","Data":"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661741 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a4c7e21f-af49-4131-85d6-c3c5656153db","Type":"ContainerDied","Data":"4baa9d3c0a7066986ba7f09d410c6211fdc554581ead0bbca851be727fc9b6c2"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.661751 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.663438 4642 generic.go:334] "Generic (PLEG): container finished" podID="2388a8f5-7be5-4284-9370-23d6f1545c8a" containerID="5645b4498308085d82bbbfaac6f38f39fd0b913a902a9cd0bb0c304c0d6e6a10" exitCode=0 Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.663562 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qch6m" event={"ID":"2388a8f5-7be5-4284-9370-23d6f1545c8a","Type":"ContainerDied","Data":"5645b4498308085d82bbbfaac6f38f39fd0b913a902a9cd0bb0c304c0d6e6a10"} Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.684437 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.420021647 podStartE2EDuration="2.684418783s" podCreationTimestamp="2026-01-28 07:06:05 +0000 UTC" firstStartedPulling="2026-01-28 07:06:06.489312555 +0000 UTC m=+1089.721401364" lastFinishedPulling="2026-01-28 07:06:06.753709691 +0000 UTC m=+1089.985798500" observedRunningTime="2026-01-28 07:06:07.672870128 +0000 UTC m=+1090.904958938" watchObservedRunningTime="2026-01-28 07:06:07.684418783 +0000 UTC m=+1090.916507591" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.687719 4642 scope.go:117] "RemoveContainer" containerID="5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.721540 4642 scope.go:117] "RemoveContainer" containerID="139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.722535 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.730457 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.743682 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744122 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="sg-core" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744141 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="sg-core" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744157 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="proxy-httpd" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744164 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="proxy-httpd" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744174 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-central-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744196 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-central-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744223 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-log" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744229 4642 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-log" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744239 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-notification-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744246 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-notification-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.744254 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-metadata" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744261 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-metadata" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744446 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-central-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744466 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-metadata" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744482 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="proxy-httpd" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744489 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="ceilometer-notification-agent" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744497 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" containerName="nova-metadata-log" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.744512 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" containerName="sg-core" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.746158 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.750250 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.750623 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.750626 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.753950 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.768693 4642 scope.go:117] "RemoveContainer" containerID="56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.778668 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.787863 4642 scope.go:117] "RemoveContainer" containerID="ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.788459 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f\": container with ID starting with ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f not found: ID does not exist" containerID="ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.788519 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f"} err="failed to get container status \"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f\": rpc error: code = NotFound desc = could not find container \"ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f\": container with ID starting with ee3f883e0cbdc6900c5e5828ab61b704ddc4e801ffce165e815c3f9a95faec7f not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.788547 4642 scope.go:117] "RemoveContainer" containerID="5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.789410 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779\": container with ID starting with 5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779 not found: ID does not exist" containerID="5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.789499 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779"} err="failed to get container status \"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779\": rpc error: code = NotFound desc = could not find container \"5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779\": container with ID starting with 5520ce0ccfc46ff0964ea4ec7491cd91581920b2c454bfc02eadd0dbec85f779 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: 
I0128 07:06:07.789539 4642 scope.go:117] "RemoveContainer" containerID="139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.790610 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731\": container with ID starting with 139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731 not found: ID does not exist" containerID="139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.790630 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731"} err="failed to get container status \"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731\": rpc error: code = NotFound desc = could not find container \"139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731\": container with ID starting with 139137ca2b1ec05c704dac4e55deffa49c324038ed7fcb5749a511c798cdc731 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.790644 4642 scope.go:117] "RemoveContainer" containerID="56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.790910 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c\": container with ID starting with 56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c not found: ID does not exist" containerID="56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.790943 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c"} err="failed to get container status \"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c\": rpc error: code = NotFound desc = could not find container \"56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c\": container with ID starting with 56ccddef658f511c781f0ccd5222d4eb7bb212c83d76b6e3b28ed74e1a6b694c not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.790968 4642 scope.go:117] "RemoveContainer" containerID="7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.799135 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.807234 4642 scope.go:117] "RemoveContainer" containerID="95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.808730 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.810344 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.812453 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.813642 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.815295 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.823505 4642 scope.go:117] "RemoveContainer" containerID="7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.823790 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858\": container with ID starting with 7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858 not found: ID does not exist" containerID="7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.823814 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858"} err="failed to get container status \"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858\": rpc error: code = NotFound desc = could not find container \"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858\": container with ID starting with 7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.823832 4642 scope.go:117] "RemoveContainer" containerID="95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" Jan 28 07:06:07 crc kubenswrapper[4642]: E0128 07:06:07.824033 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1\": container with ID starting with 95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1 not found: ID does not exist" containerID="95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.824061 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1"} err="failed to get container status \"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1\": rpc error: code = NotFound desc = could not find container \"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1\": container with ID starting with 95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.824075 4642 scope.go:117] "RemoveContainer" containerID="7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.825312 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858"} err="failed to get container status \"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858\": rpc error: 
code = NotFound desc = could not find container \"7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858\": container with ID starting with 7751a6eb425f84a1b88980f5eeb5806af748eb81b0cc2bb67bc7a63f4d224858 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.825353 4642 scope.go:117] "RemoveContainer" containerID="95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.825897 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1"} err="failed to get container status \"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1\": rpc error: code = NotFound desc = could not find container \"95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1\": container with ID starting with 95229b8637a0fdb71527cdf4a3fd28c615b90c1750b83117a50d4d02d45862e1 not found: ID does not exist" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.926945 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927347 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927393 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vqr\" (UniqueName: \"kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927424 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927441 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927456 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kp9\" (UniqueName: \"kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927485 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927499 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927528 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927561 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927582 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927635 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.927652 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:07 crc kubenswrapper[4642]: I0128 07:06:07.991127 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029543 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029579 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029616 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029654 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029678 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029730 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029745 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029775 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029794 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029822 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9vqr\" (UniqueName: \"kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr\") 
pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029849 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029861 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.029878 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kp9\" (UniqueName: \"kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.030584 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.030687 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.031180 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.033412 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.035152 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.036119 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.037497 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") 
" pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.043042 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.045801 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.046639 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kp9\" (UniqueName: \"kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.047561 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.049825 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vqr\" (UniqueName: \"kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.059770 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") " pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.063687 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.124012 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.131828 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts\") pod \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.132002 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data\") pod \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.132149 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle\") pod \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.132217 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kc8t\" (UniqueName: \"kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t\") pod \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\" (UID: \"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0\") " Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.136530 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t" (OuterVolumeSpecName: "kube-api-access-6kc8t") pod "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" (UID: "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0"). InnerVolumeSpecName "kube-api-access-6kc8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.139953 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts" (OuterVolumeSpecName: "scripts") pod "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" (UID: "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.164755 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data" (OuterVolumeSpecName: "config-data") pod "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" (UID: "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.171213 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" (UID: "84c94226-5b5d-4c10-a0fb-fcf5e42e34c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.199287 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.199536 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.236150 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.236216 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.236244 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kc8t\" (UniqueName: \"kubernetes.io/projected/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-kube-api-access-6kc8t\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.236255 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.492406 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.563265 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:08 crc kubenswrapper[4642]: W0128 07:06:08.569345 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3566f95f_d7c0_4181_8314_df6adba02d06.slice/crio-fe89e83b43813983659c3c48474ed25b59dfce0950361cb1639a3e56b77b738a WatchSource:0}: Error finding container fe89e83b43813983659c3c48474ed25b59dfce0950361cb1639a3e56b77b738a: Status 404 returned error can't find the container with id fe89e83b43813983659c3c48474ed25b59dfce0950361cb1639a3e56b77b738a Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.682044 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerStarted","Data":"fe89e83b43813983659c3c48474ed25b59dfce0950361cb1639a3e56b77b738a"} Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.685918 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tb67c" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.685941 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tb67c" event={"ID":"84c94226-5b5d-4c10-a0fb-fcf5e42e34c0","Type":"ContainerDied","Data":"135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2"} Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.685984 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135d04a1eb17aee36701ed806dcc32b121bde67adf5076a887d384fcf5578fd2" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.689531 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerStarted","Data":"6c5b259bcfd8680f79dbc89d88c18ece17f92336503e1e1923a35dae6182005c"} Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.718610 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 07:06:08 crc kubenswrapper[4642]: E0128 07:06:08.719060 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" containerName="nova-cell1-conductor-db-sync" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.719074 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" containerName="nova-cell1-conductor-db-sync" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.719281 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" containerName="nova-cell1-conductor-db-sync" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.719878 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.726197 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.729902 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.850461 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.850673 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqwz\" (UniqueName: \"kubernetes.io/projected/e5b728d1-49f3-4652-b330-89eb118ee26e-kube-api-access-mjqwz\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.850847 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.952686 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.953072 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.953138 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqwz\" (UniqueName: \"kubernetes.io/projected/e5b728d1-49f3-4652-b330-89eb118ee26e-kube-api-access-mjqwz\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.958275 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.958578 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b728d1-49f3-4652-b330-89eb118ee26e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:08 crc kubenswrapper[4642]: I0128 07:06:08.970489 4642 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqwz\" (UniqueName: \"kubernetes.io/projected/e5b728d1-49f3-4652-b330-89eb118ee26e-kube-api-access-mjqwz\") pod \"nova-cell1-conductor-0\" (UID: \"e5b728d1-49f3-4652-b330-89eb118ee26e\") " pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.015284 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.053955 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.114573 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50fa8a6c-3a6a-4b78-8330-61f133fc251d" path="/var/lib/kubelet/pods/50fa8a6c-3a6a-4b78-8330-61f133fc251d/volumes" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.115269 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c7e21f-af49-4131-85d6-c3c5656153db" path="/var/lib/kubelet/pods/a4c7e21f-af49-4131-85d6-c3c5656153db/volumes" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.156310 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts\") pod \"2388a8f5-7be5-4284-9370-23d6f1545c8a\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.158427 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data\") pod \"2388a8f5-7be5-4284-9370-23d6f1545c8a\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.158562 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f68nz\" (UniqueName: \"kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz\") pod \"2388a8f5-7be5-4284-9370-23d6f1545c8a\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.158594 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle\") pod \"2388a8f5-7be5-4284-9370-23d6f1545c8a\" (UID: \"2388a8f5-7be5-4284-9370-23d6f1545c8a\") " Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.160919 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts" (OuterVolumeSpecName: "scripts") pod "2388a8f5-7be5-4284-9370-23d6f1545c8a" (UID: "2388a8f5-7be5-4284-9370-23d6f1545c8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.161733 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz" (OuterVolumeSpecName: "kube-api-access-f68nz") pod "2388a8f5-7be5-4284-9370-23d6f1545c8a" (UID: "2388a8f5-7be5-4284-9370-23d6f1545c8a"). InnerVolumeSpecName "kube-api-access-f68nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.190529 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2388a8f5-7be5-4284-9370-23d6f1545c8a" (UID: "2388a8f5-7be5-4284-9370-23d6f1545c8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.200041 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data" (OuterVolumeSpecName: "config-data") pod "2388a8f5-7be5-4284-9370-23d6f1545c8a" (UID: "2388a8f5-7be5-4284-9370-23d6f1545c8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.260363 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f68nz\" (UniqueName: \"kubernetes.io/projected/2388a8f5-7be5-4284-9370-23d6f1545c8a-kube-api-access-f68nz\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.261449 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.261501 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.261525 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388a8f5-7be5-4284-9370-23d6f1545c8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.459402 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.704637 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerStarted","Data":"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.704676 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerStarted","Data":"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.706716 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qch6m" event={"ID":"2388a8f5-7be5-4284-9370-23d6f1545c8a","Type":"ContainerDied","Data":"2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.706744 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f50d816f127a20aaded8cca1f567e2c8cdd651310fcea78d58915e7989f6eeb" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.706762 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qch6m" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.709638 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerStarted","Data":"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.711247 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5b728d1-49f3-4652-b330-89eb118ee26e","Type":"ContainerStarted","Data":"833c1a25bb71833fdc7ab1a36f39ced3a68012bb60ea81e530432d68dcfb89a7"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.711273 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e5b728d1-49f3-4652-b330-89eb118ee26e","Type":"ContainerStarted","Data":"a4fe94945869e8d4c065bd366e7fe2924b23a8df6093254732cdc0814edf3ad1"} Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.712254 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.733372 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.733343521 podStartE2EDuration="2.733343521s" podCreationTimestamp="2026-01-28 07:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:09.721975306 +0000 UTC m=+1092.954064115" watchObservedRunningTime="2026-01-28 07:06:09.733343521 +0000 UTC m=+1092.965432330" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.763567 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.763546931 podStartE2EDuration="1.763546931s" podCreationTimestamp="2026-01-28 07:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:09.739054819 +0000 UTC m=+1092.971143629" watchObservedRunningTime="2026-01-28 07:06:09.763546931 +0000 UTC m=+1092.995635729" Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.798834 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.804920 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:09 crc kubenswrapper[4642]: I0128 07:06:09.859510 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.725378 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerStarted","Data":"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3"} Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.727649 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerStarted","Data":"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee"} Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.727705 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerStarted","Data":"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d"} Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.727989 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-log" containerID="cri-o://89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" gracePeriod=30 Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.728249 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-api" containerID="cri-o://d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" gracePeriod=30 Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.748116 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.909569412 podStartE2EDuration="15.748097556s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="2026-01-28 07:06:00.756423609 +0000 UTC m=+1083.988512418" lastFinishedPulling="2026-01-28 07:06:09.594951753 +0000 UTC m=+1092.827040562" observedRunningTime="2026-01-28 07:06:10.741884664 +0000 UTC m=+1093.973973473" watchObservedRunningTime="2026-01-28 07:06:10.748097556 +0000 UTC m=+1093.980186365" Jan 28 07:06:10 crc kubenswrapper[4642]: I0128 07:06:10.942414 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.011449 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.011670 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="dnsmasq-dns" containerID="cri-o://f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b" gracePeriod=10 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.360759 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.480399 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.545105 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs\") pod \"417842f7-e349-410b-9dc2-5ef497e538de\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.545216 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data\") pod \"417842f7-e349-410b-9dc2-5ef497e538de\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.545281 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbrr\" (UniqueName: \"kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr\") pod \"417842f7-e349-410b-9dc2-5ef497e538de\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.545315 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle\") pod \"417842f7-e349-410b-9dc2-5ef497e538de\" (UID: \"417842f7-e349-410b-9dc2-5ef497e538de\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.546686 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs" (OuterVolumeSpecName: "logs") pod "417842f7-e349-410b-9dc2-5ef497e538de" (UID: "417842f7-e349-410b-9dc2-5ef497e538de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.553787 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr" (OuterVolumeSpecName: "kube-api-access-fjbrr") pod "417842f7-e349-410b-9dc2-5ef497e538de" (UID: "417842f7-e349-410b-9dc2-5ef497e538de"). InnerVolumeSpecName "kube-api-access-fjbrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.569863 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data" (OuterVolumeSpecName: "config-data") pod "417842f7-e349-410b-9dc2-5ef497e538de" (UID: "417842f7-e349-410b-9dc2-5ef497e538de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.572111 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "417842f7-e349-410b-9dc2-5ef497e538de" (UID: "417842f7-e349-410b-9dc2-5ef497e538de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.646736 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvmt\" (UniqueName: \"kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.646955 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647030 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647068 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647205 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647243 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config\") pod \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\" (UID: \"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef\") " Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647591 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/417842f7-e349-410b-9dc2-5ef497e538de-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647614 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647627 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjbrr\" (UniqueName: \"kubernetes.io/projected/417842f7-e349-410b-9dc2-5ef497e538de-kube-api-access-fjbrr\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.647638 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/417842f7-e349-410b-9dc2-5ef497e538de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.649602 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt" (OuterVolumeSpecName: "kube-api-access-hqvmt") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: 
"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "kube-api-access-hqvmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.687783 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.688970 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config" (OuterVolumeSpecName: "config") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.689057 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.693743 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.696518 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" (UID: "a3686e6e-7a8c-45b8-80a5-c64c8fc92aef"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.737644 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerStarted","Data":"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.739212 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a6a27a0-9f13-4980-95f5-ec06d1be7492","Type":"ContainerStarted","Data":"cb3aa49821a306881840a596197f91894ddf6efbfc716166791e344624614393"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.739325 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" containerName="nova-scheduler-scheduler" containerID="cri-o://cb3aa49821a306881840a596197f91894ddf6efbfc716166791e344624614393" gracePeriod=30 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744802 4642 generic.go:334] "Generic (PLEG): container finished" podID="417842f7-e349-410b-9dc2-5ef497e538de" containerID="d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" exitCode=0 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744833 4642 generic.go:334] "Generic (PLEG): container finished" podID="417842f7-e349-410b-9dc2-5ef497e538de" containerID="89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" exitCode=143 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744885 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerDied","Data":"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744915 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerDied","Data":"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744927 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"417842f7-e349-410b-9dc2-5ef497e538de","Type":"ContainerDied","Data":"160f64ba1c93ed6391e82ad312397e3fd0c81dc2095c066a586852c02aa9af39"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.744946 4642 scope.go:117] "RemoveContainer" containerID="d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.745085 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748103 4642 generic.go:334] "Generic (PLEG): container finished" podID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerID="f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b" exitCode=0 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748302 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" event={"ID":"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef","Type":"ContainerDied","Data":"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748348 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" event={"ID":"a3686e6e-7a8c-45b8-80a5-c64c8fc92aef","Type":"ContainerDied","Data":"29f617298ebb96a9bfa5d52561fcb5b49d7e75233af9d0830c1a931c39d8fa35"} Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748374 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-log" containerID="cri-o://fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" gracePeriod=30 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748388 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f9d8d9f-s9pnh" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.748413 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-metadata" containerID="cri-o://f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" gracePeriod=30 Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.759036 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=6.594985752 podStartE2EDuration="16.759023563s" podCreationTimestamp="2026-01-28 07:05:55 +0000 UTC" firstStartedPulling="2026-01-28 07:06:00.737259696 +0000 UTC m=+1083.969348505" lastFinishedPulling="2026-01-28 07:06:10.901297506 +0000 UTC m=+1094.133386316" observedRunningTime="2026-01-28 07:06:11.757290315 +0000 UTC m=+1094.989379124" watchObservedRunningTime="2026-01-28 07:06:11.759023563 +0000 UTC m=+1094.991112372" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.760589 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.760623 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.760634 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.760645 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: 
I0128 07:06:11.760654 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvmt\" (UniqueName: \"kubernetes.io/projected/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-kube-api-access-hqvmt\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.760662 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.780448 4642 scope.go:117] "RemoveContainer" containerID="89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.853616 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.855642 4642 scope.go:117] "RemoveContainer" containerID="d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.856586 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d\": container with ID starting with d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d not found: ID does not exist" containerID="d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.856615 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d"} err="failed to get container status \"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d\": rpc error: code = NotFound desc = could not find container \"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d\": container with ID starting with d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.856632 4642 scope.go:117] "RemoveContainer" containerID="89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.863362 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee\": container with ID starting with 89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee not found: ID does not exist" containerID="89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.863393 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee"} err="failed to get container status \"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee\": rpc error: code = NotFound desc = could not find container \"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee\": container with ID starting with 89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.863412 4642 scope.go:117] "RemoveContainer" containerID="d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.865098 
4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d"} err="failed to get container status \"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d\": rpc error: code = NotFound desc = could not find container \"d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d\": container with ID starting with d14527beab3deb241a270644177ffad1309f288594e4ac2f45ccc4d4c737a07d not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.865124 4642 scope.go:117] "RemoveContainer" containerID="89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.867317 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee"} err="failed to get container status \"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee\": rpc error: code = NotFound desc = could not find container \"89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee\": container with ID starting with 89cf6f74006b5d6439b5268009b54c781f1d7cb627743701bce0e9a0e079bdee not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.867341 4642 scope.go:117] "RemoveContainer" containerID="f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.880595 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75f9d8d9f-s9pnh"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.898058 4642 scope.go:117] "RemoveContainer" containerID="93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.902268 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.911179 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.917621 4642 scope.go:117] "RemoveContainer" containerID="f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.917732 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918090 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="dnsmasq-dns" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918106 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="dnsmasq-dns" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918120 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-log" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918128 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-log" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918139 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2388a8f5-7be5-4284-9370-23d6f1545c8a" containerName="nova-manage" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918145 4642 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2388a8f5-7be5-4284-9370-23d6f1545c8a" containerName="nova-manage" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918160 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="init" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918165 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="init" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918179 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-api" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918200 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-api" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918367 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="2388a8f5-7be5-4284-9370-23d6f1545c8a" containerName="nova-manage" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918376 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-log" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918387 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" containerName="dnsmasq-dns" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918403 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="417842f7-e349-410b-9dc2-5ef497e538de" containerName="nova-api-api" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.918879 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b\": container with ID starting with f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b not found: ID does not exist" containerID="f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918928 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b"} err="failed to get container status \"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b\": rpc error: code = NotFound desc = could not find container \"f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b\": container with ID starting with f4e3b6fc49653592508788121dcaaba70eb8cf6d338fa2e21cc923e1aaf15d0b not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.918962 4642 scope.go:117] "RemoveContainer" containerID="93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.919316 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: E0128 07:06:11.919886 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f\": container with ID starting with 93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f not found: ID does not exist" containerID="93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.919919 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f"} err="failed to get container status \"93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f\": rpc error: code = NotFound desc = could not find container \"93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f\": container with ID starting with 93aea3e48c85f16e1ada5dacad447eee7a5bf96198766a6027b13c19b77eef9f not found: ID does not exist" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.921340 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.940837 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.967538 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.967596 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.967730 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:11 crc kubenswrapper[4642]: I0128 07:06:11.967772 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lldg\" (UniqueName: \"kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.069839 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.070793 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.071144 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.071219 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lldg\" (UniqueName: \"kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.071876 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.078115 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.118755 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lldg\" (UniqueName: \"kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.135700 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data\") pod \"nova-api-0\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.165871 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.236950 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.273299 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data\") pod \"3566f95f-d7c0-4181-8314-df6adba02d06\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.273355 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs\") pod \"3566f95f-d7c0-4181-8314-df6adba02d06\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.273398 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74kp9\" (UniqueName: \"kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9\") pod \"3566f95f-d7c0-4181-8314-df6adba02d06\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.273484 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs\") pod \"3566f95f-d7c0-4181-8314-df6adba02d06\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.273596 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle\") pod \"3566f95f-d7c0-4181-8314-df6adba02d06\" (UID: \"3566f95f-d7c0-4181-8314-df6adba02d06\") " Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.275383 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs" (OuterVolumeSpecName: "logs") pod "3566f95f-d7c0-4181-8314-df6adba02d06" (UID: "3566f95f-d7c0-4181-8314-df6adba02d06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.284742 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9" (OuterVolumeSpecName: "kube-api-access-74kp9") pod "3566f95f-d7c0-4181-8314-df6adba02d06" (UID: "3566f95f-d7c0-4181-8314-df6adba02d06"). InnerVolumeSpecName "kube-api-access-74kp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.301047 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3566f95f-d7c0-4181-8314-df6adba02d06" (UID: "3566f95f-d7c0-4181-8314-df6adba02d06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.307350 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data" (OuterVolumeSpecName: "config-data") pod "3566f95f-d7c0-4181-8314-df6adba02d06" (UID: "3566f95f-d7c0-4181-8314-df6adba02d06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.339532 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3566f95f-d7c0-4181-8314-df6adba02d06" (UID: "3566f95f-d7c0-4181-8314-df6adba02d06"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.374703 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.374726 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.374737 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3566f95f-d7c0-4181-8314-df6adba02d06-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.374762 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74kp9\" (UniqueName: \"kubernetes.io/projected/3566f95f-d7c0-4181-8314-df6adba02d06-kube-api-access-74kp9\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.374774 4642 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3566f95f-d7c0-4181-8314-df6adba02d06-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.606728 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.760522 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerStarted","Data":"f77e24f49c7200f0739a9f8ef0cb1fff5beaf968fcd91e272fc34a89439871f2"} Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762209 4642 generic.go:334] "Generic (PLEG): container finished" podID="3566f95f-d7c0-4181-8314-df6adba02d06" containerID="f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" exitCode=0 Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762234 4642 generic.go:334] "Generic (PLEG): container finished" podID="3566f95f-d7c0-4181-8314-df6adba02d06" containerID="fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" exitCode=143 Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762265 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerDied","Data":"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab"} Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762283 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerDied","Data":"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9"} Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762294 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"3566f95f-d7c0-4181-8314-df6adba02d06","Type":"ContainerDied","Data":"fe89e83b43813983659c3c48474ed25b59dfce0950361cb1639a3e56b77b738a"} Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762308 4642 scope.go:117] "RemoveContainer" containerID="f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.762392 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.801453 4642 scope.go:117] "RemoveContainer" containerID="fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.814108 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.824165 4642 scope.go:117] "RemoveContainer" containerID="f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" Jan 28 07:06:12 crc kubenswrapper[4642]: E0128 07:06:12.824866 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab\": container with ID starting with f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab not found: ID does not exist" containerID="f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.824899 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab"} err="failed to get container status \"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab\": rpc error: code = NotFound desc = could not find container \"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab\": container with ID starting with f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab not found: ID does not exist" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.824919 4642 scope.go:117] "RemoveContainer" containerID="fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" Jan 28 07:06:12 crc kubenswrapper[4642]: E0128 07:06:12.825958 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9\": container with ID starting with fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9 not found: ID does not exist" containerID="fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.825981 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9"} err="failed to get container status \"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9\": rpc error: code = NotFound desc = could not find container \"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9\": container with ID starting with fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9 not found: ID does not exist" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.825996 4642 scope.go:117] "RemoveContainer" containerID="f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab" Jan 28 07:06:12 crc 
kubenswrapper[4642]: I0128 07:06:12.828435 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab"} err="failed to get container status \"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab\": rpc error: code = NotFound desc = could not find container \"f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab\": container with ID starting with f83467772a226bffd3f33d110cbf507d7292588775f35721430cd873189d64ab not found: ID does not exist" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.828495 4642 scope.go:117] "RemoveContainer" containerID="fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.828856 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9"} err="failed to get container status \"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9\": rpc error: code = NotFound desc = could not find container \"fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9\": container with ID starting with fa52fabde28f60eb371f1e6e4afa805550f7d8912273b54dafa18cf5e445c7e9 not found: ID does not exist" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.847574 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.854685 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:12 crc kubenswrapper[4642]: E0128 07:06:12.855110 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-log" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.855131 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-log" Jan 28 07:06:12 crc kubenswrapper[4642]: E0128 07:06:12.855142 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-metadata" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.855150 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-metadata" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.855391 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-metadata" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.855410 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" containerName="nova-metadata-log" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.856429 4642 util.go:30] "No sandbox for pod can be found. 
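
The paired "RemoveContainer" / NotFound sequence above is a benign race: API-driven deletion and PLEG cleanup both try to remove containers that CRI-O has already discarded, so every status lookup comes back as gRPC NotFound. A minimal sketch of treating that answer as success; runtimeClient is a hypothetical stand-in for a CRI connection, not kubelet code:

package kubelognotes

import (
    "context"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// runtimeClient stands in for a CRI runtime connection (hypothetical).
type runtimeClient interface {
    RemoveContainer(ctx context.Context, id string) error
}

// removeIfPresent treats NotFound as success: by the time the kubelet
// retries removal above, CRI-O has already deleted the container.
func removeIfPresent(ctx context.Context, c runtimeClient, id string) error {
    if err := c.RemoveContainer(ctx, id); err != nil && status.Code(err) != codes.NotFound {
        return err
    }
    return nil
}
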
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.858746 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.859048 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.864949 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.882075 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.882141 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.882181 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.882281 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7k7\" (UniqueName: \"kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.882312 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.984639 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.984690 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.984724 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 
07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.984796 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc7k7\" (UniqueName: \"kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.984821 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.985693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.989467 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.989682 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:12 crc kubenswrapper[4642]: I0128 07:06:12.995542 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.007752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc7k7\" (UniqueName: \"kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7\") pod \"nova-metadata-0\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") " pod="openstack/nova-metadata-0" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.146443 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3566f95f-d7c0-4181-8314-df6adba02d06" path="/var/lib/kubelet/pods/3566f95f-d7c0-4181-8314-df6adba02d06/volumes" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.150941 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417842f7-e349-410b-9dc2-5ef497e538de" path="/var/lib/kubelet/pods/417842f7-e349-410b-9dc2-5ef497e538de/volumes" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.151576 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3686e6e-7a8c-45b8-80a5-c64c8fc92aef" path="/var/lib/kubelet/pods/a3686e6e-7a8c-45b8-80a5-c64c8fc92aef/volumes" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.192984 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.630722 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.786770 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerStarted","Data":"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"} Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.788171 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.789151 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerStarted","Data":"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e"} Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.789179 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerStarted","Data":"746e988de22a35d54a0eee2e9f608e3ce4161ba5f45a350eaed5f30333478c2d"} Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.791276 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerStarted","Data":"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e"} Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.791329 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerStarted","Data":"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c"} Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.812538 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.39475592 podStartE2EDuration="6.812522272s" podCreationTimestamp="2026-01-28 07:06:07 +0000 UTC" firstStartedPulling="2026-01-28 07:06:08.498371216 +0000 UTC m=+1091.730460025" lastFinishedPulling="2026-01-28 07:06:12.916137567 +0000 UTC m=+1096.148226377" observedRunningTime="2026-01-28 07:06:13.803013282 +0000 UTC m=+1097.035102092" watchObservedRunningTime="2026-01-28 07:06:13.812522272 +0000 UTC m=+1097.044611080" Jan 28 07:06:13 crc kubenswrapper[4642]: I0128 07:06:13.826646 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.826627753 podStartE2EDuration="2.826627753s" podCreationTimestamp="2026-01-28 07:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:13.821877943 +0000 UTC m=+1097.053966751" watchObservedRunningTime="2026-01-28 07:06:13.826627753 +0000 UTC m=+1097.058716561" Jan 28 07:06:14 crc kubenswrapper[4642]: I0128 07:06:14.081016 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 07:06:14 crc kubenswrapper[4642]: I0128 07:06:14.801016 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerStarted","Data":"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814"} Jan 28 07:06:14 crc kubenswrapper[4642]: I0128 
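
The two pod_startup_latency_tracker entries above fit a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (for nova-api-0, whose pull timestamps are zero, the two values coincide). That reading is an inference from the numbers, not documented behaviour; the sketch below reproduces the ceilometer-0 values from timestamps copied out of the log:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Layout matching Go's default time.Time formatting used in the log.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    parse := func(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }
    created := parse("2026-01-28 07:06:07 +0000 UTC")                // podCreationTimestamp
    observed := parse("2026-01-28 07:06:13.812522272 +0000 UTC")     // watchObservedRunningTime
    pullStart := parse("2026-01-28 07:06:08.498371216 +0000 UTC")    // firstStartedPulling
    pullEnd := parse("2026-01-28 07:06:12.916137567 +0000 UTC")      // lastFinishedPulling

    e2e := observed.Sub(created)        // 6.812522272s = podStartE2EDuration
    slo := e2e - pullEnd.Sub(pullStart) // 2.394755921s, matching podStartSLOduration up to print rounding
    fmt.Println(e2e, slo)
}
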
07:06:14.824411 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.824392554 podStartE2EDuration="2.824392554s" podCreationTimestamp="2026-01-28 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:14.816109269 +0000 UTC m=+1098.048198079" watchObservedRunningTime="2026-01-28 07:06:14.824392554 +0000 UTC m=+1098.056481364" Jan 28 07:06:15 crc kubenswrapper[4642]: I0128 07:06:15.741209 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 07:06:16 crc kubenswrapper[4642]: I0128 07:06:16.028171 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 07:06:18 crc kubenswrapper[4642]: I0128 07:06:18.193917 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 07:06:18 crc kubenswrapper[4642]: I0128 07:06:18.194583 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 07:06:22 crc kubenswrapper[4642]: I0128 07:06:22.166444 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 07:06:22 crc kubenswrapper[4642]: I0128 07:06:22.166745 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 07:06:23 crc kubenswrapper[4642]: I0128 07:06:23.193804 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 07:06:23 crc kubenswrapper[4642]: I0128 07:06:23.194089 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 07:06:23 crc kubenswrapper[4642]: I0128 07:06:23.248302 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 07:06:23 crc kubenswrapper[4642]: I0128 07:06:23.248372 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 07:06:24 crc kubenswrapper[4642]: I0128 07:06:24.208330 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:06:24 crc kubenswrapper[4642]: I0128 07:06:24.208392 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.169981 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.170400 
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.169981 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.170400 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.170676 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.170696 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.172591 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.175450 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.319741 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"]
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.321322 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.329258 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"]
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.339874 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczbf\" (UniqueName: \"kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.339921 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.339971 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.340015 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.340038 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.340056 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.440826 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczbf\" (UniqueName: \"kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.440902 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.440961 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.441020 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.441055 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.441076 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.441886 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.442349 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.442430 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.442525 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.442629 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.458135 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczbf\" (UniqueName: \"kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf\") pod \"dnsmasq-dns-59775c5b57-vjbrx\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") " pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.642555 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.989160 4642 generic.go:334] "Generic (PLEG): container finished" podID="6f280eea-8ffb-4200-9357-df15e71681b0" containerID="129ec0622f1eed4bd86fd6f642ea8bf1d9bc0dead6da5d1e928290ce9f026f40" exitCode=137
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.989306 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f280eea-8ffb-4200-9357-df15e71681b0","Type":"ContainerDied","Data":"129ec0622f1eed4bd86fd6f642ea8bf1d9bc0dead6da5d1e928290ce9f026f40"}
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.991049 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6f280eea-8ffb-4200-9357-df15e71681b0","Type":"ContainerDied","Data":"f7501da1456f5d903e7fed9ca9a5b907eff80ddab1bd909225b284e7d9651dd4"}
Jan 28 07:06:32 crc kubenswrapper[4642]: I0128 07:06:32.991069 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7501da1456f5d903e7fed9ca9a5b907eff80ddab1bd909225b284e7d9651dd4"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.031237 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.153158 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv5ms\" (UniqueName: \"kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms\") pod \"6f280eea-8ffb-4200-9357-df15e71681b0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") "
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.153241 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data\") pod \"6f280eea-8ffb-4200-9357-df15e71681b0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") "
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.153317 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle\") pod \"6f280eea-8ffb-4200-9357-df15e71681b0\" (UID: \"6f280eea-8ffb-4200-9357-df15e71681b0\") "
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.168404 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms" (OuterVolumeSpecName: "kube-api-access-jv5ms") pod "6f280eea-8ffb-4200-9357-df15e71681b0" (UID: "6f280eea-8ffb-4200-9357-df15e71681b0"). InnerVolumeSpecName "kube-api-access-jv5ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.177496 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"]
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.183335 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f280eea-8ffb-4200-9357-df15e71681b0" (UID: "6f280eea-8ffb-4200-9357-df15e71681b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.185528 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data" (OuterVolumeSpecName: "config-data") pod "6f280eea-8ffb-4200-9357-df15e71681b0" (UID: "6f280eea-8ffb-4200-9357-df15e71681b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.197954 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.200357 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.204656 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.257138 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv5ms\" (UniqueName: \"kubernetes.io/projected/6f280eea-8ffb-4200-9357-df15e71681b0-kube-api-access-jv5ms\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.258037 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.258063 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f280eea-8ffb-4200-9357-df15e71681b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.935295 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.935567 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-central-agent" containerID="cri-o://28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299" gracePeriod=30
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.935645 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="proxy-httpd" containerID="cri-o://a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b" gracePeriod=30
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.935760 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="sg-core" containerID="cri-o://e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1" gracePeriod=30
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.935864 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-notification-agent" containerID="cri-o://7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3" gracePeriod=30
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.943171 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.190:3000/\": read tcp 10.217.0.2:35532->10.217.0.190:3000: read: connection reset by peer"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.999235 4642 generic.go:334] "Generic (PLEG): container finished" podID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerID="74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688" exitCode=0
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.999279 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" event={"ID":"3966764e-5967-46c5-9b39-88fd738b8b0a","Type":"ContainerDied","Data":"74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688"}
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.999317 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:33 crc kubenswrapper[4642]: I0128 07:06:33.999319 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" event={"ID":"3966764e-5967-46c5-9b39-88fd738b8b0a","Type":"ContainerStarted","Data":"0bd27367e383ac4e50b5c414b55a0190baa3c4288037b53dc8ae32ae9a08ba73"}
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.009583 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.173100 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.179105 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.196046 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: E0128 07:06:34.196516 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f280eea-8ffb-4200-9357-df15e71681b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.196537 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f280eea-8ffb-4200-9357-df15e71681b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.196718 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f280eea-8ffb-4200-9357-df15e71681b0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.197338 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
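
Above, nova-cell1-novncproxy-0 is deleted (UID beginning 6f280eea) and immediately recreated under the same name with a new UID (beginning 8befd04d), with the CPU and memory managers discarding state keyed to the old UID in between. A client that needs to follow such a replacement can key on the UID change; the helper below is illustrative only (not part of kubelet or any operator), built on standard client-go calls:

package kubelognotes

import (
    "context"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// waitForReplacement polls until a pod with the same name but a different
// UID exists, i.e. the controller has recreated it after a delete.
func waitForReplacement(ctx context.Context, c kubernetes.Interface, ns, name string, oldUID types.UID) (*corev1.Pod, error) {
    for {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        switch {
        case err == nil && pod.UID != oldUID:
            return pod, nil // the replacement pod is visible
        case err != nil && !apierrors.IsNotFound(err):
            return nil, err // NotFound just means the gap between delete and recreate
        }
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(2 * time.Second):
        }
    }
}
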
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.199082 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.199130 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.199313 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.204540 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.284318 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.284448 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp75m\" (UniqueName: \"kubernetes.io/projected/8befd04d-7f83-44b3-8136-94b85511b14f-kube-api-access-xp75m\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.284619 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.284697 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.284830 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.386547 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.386728 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.386785 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.386825 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp75m\" (UniqueName: \"kubernetes.io/projected/8befd04d-7f83-44b3-8136-94b85511b14f-kube-api-access-xp75m\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.387033 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.391113 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.391551 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.392174 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.393473 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8befd04d-7f83-44b3-8136-94b85511b14f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.404813 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp75m\" (UniqueName: \"kubernetes.io/projected/8befd04d-7f83-44b3-8136-94b85511b14f-kube-api-access-xp75m\") pod \"nova-cell1-novncproxy-0\" (UID: \"8befd04d-7f83-44b3-8136-94b85511b14f\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.510032 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.545444 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: I0128 07:06:34.945145 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 07:06:34 crc kubenswrapper[4642]: W0128 07:06:34.946665 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8befd04d_7f83_44b3_8136_94b85511b14f.slice/crio-20f73cd36d4ccac276e9cacef63591670d9a1036bb2acc2c6959dd2223d038ac WatchSource:0}: Error finding container 20f73cd36d4ccac276e9cacef63591670d9a1036bb2acc2c6959dd2223d038ac: Status 404 returned error can't find the container with id 20f73cd36d4ccac276e9cacef63591670d9a1036bb2acc2c6959dd2223d038ac
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.008045 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" event={"ID":"3966764e-5967-46c5-9b39-88fd738b8b0a","Type":"ContainerStarted","Data":"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"}
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.008769 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.010595 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8befd04d-7f83-44b3-8136-94b85511b14f","Type":"ContainerStarted","Data":"20f73cd36d4ccac276e9cacef63591670d9a1036bb2acc2c6959dd2223d038ac"}
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.022802 4642 generic.go:334] "Generic (PLEG): container finished" podID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerID="a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b" exitCode=0
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.022830 4642 generic.go:334] "Generic (PLEG): container finished" podID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerID="e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1" exitCode=2
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.022839 4642 generic.go:334] "Generic (PLEG): container finished" podID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerID="28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299" exitCode=0
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.022997 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-log" containerID="cri-o://ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e" gracePeriod=30
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.023239 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerDied","Data":"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"}
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.023267 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerDied","Data":"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"}
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.023279 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerDied","Data":"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299"}
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.024036 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-api" containerID="cri-o://9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c" gracePeriod=30
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.027272 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" podStartSLOduration=3.027262213 podStartE2EDuration="3.027262213s" podCreationTimestamp="2026-01-28 07:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:35.024372353 +0000 UTC m=+1118.256461162" watchObservedRunningTime="2026-01-28 07:06:35.027262213 +0000 UTC m=+1118.259351022"
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.108841 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f280eea-8ffb-4200-9357-df15e71681b0" path="/var/lib/kubelet/pods/6f280eea-8ffb-4200-9357-df15e71681b0/volumes"
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.669555 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716618 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716661 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716688 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716711 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9vqr\" (UniqueName: \"kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716831 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716863 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716878 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716912 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts\") pod \"71c8e1ac-188f-44f8-a158-3a1c97035d42\" (UID: \"71c8e1ac-188f-44f8-a158-3a1c97035d42\") "
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.716966 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.717216 4642 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.717799 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.724345 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr" (OuterVolumeSpecName: "kube-api-access-l9vqr") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "kube-api-access-l9vqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.729536 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts" (OuterVolumeSpecName: "scripts") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.761361 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.784818 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.806612 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820467 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820499 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9vqr\" (UniqueName: \"kubernetes.io/projected/71c8e1ac-188f-44f8-a158-3a1c97035d42-kube-api-access-l9vqr\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820513 4642 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820522 4642 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820531 4642 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71c8e1ac-188f-44f8-a158-3a1c97035d42-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.820539 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.835969 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data" (OuterVolumeSpecName: "config-data") pod "71c8e1ac-188f-44f8-a158-3a1c97035d42" (UID: "71c8e1ac-188f-44f8-a158-3a1c97035d42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:35 crc kubenswrapper[4642]: I0128 07:06:35.921527 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c8e1ac-188f-44f8-a158-3a1c97035d42-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.038381 4642 generic.go:334] "Generic (PLEG): container finished" podID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerID="7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3" exitCode=0
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.038476 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerDied","Data":"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3"}
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.038509 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71c8e1ac-188f-44f8-a158-3a1c97035d42","Type":"ContainerDied","Data":"6c5b259bcfd8680f79dbc89d88c18ece17f92336503e1e1923a35dae6182005c"}
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.038539 4642 scope.go:117] "RemoveContainer" containerID="a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.038969 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.043950 4642 generic.go:334] "Generic (PLEG): container finished" podID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerID="ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e" exitCode=143
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.044109 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerDied","Data":"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e"}
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.046104 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8befd04d-7f83-44b3-8136-94b85511b14f","Type":"ContainerStarted","Data":"1d0c7e0be732d7ff99f634df0bfb40b87e65545a57356bae1ea98b5310471820"}
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.068198 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.068167877 podStartE2EDuration="2.068167877s" podCreationTimestamp="2026-01-28 07:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:36.062013261 +0000 UTC m=+1119.294102070" watchObservedRunningTime="2026-01-28 07:06:36.068167877 +0000 UTC m=+1119.300256685"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.080423 4642 scope.go:117] "RemoveContainer" containerID="e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.103806 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.110245 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.120428 4642 scope.go:117] "RemoveContainer" containerID="7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.128586 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.128968 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-notification-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.128981 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-notification-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.128990 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="proxy-httpd"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.128996 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="proxy-httpd"
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.129012 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-central-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129018 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-central-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.129029 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="sg-core"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129036 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="sg-core"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129212 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="proxy-httpd"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129221 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-central-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129239 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="ceilometer-notification-agent"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.129251 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" containerName="sg-core"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.130859 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.134380 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.134436 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.134673 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.134811 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.150405 4642 scope.go:117] "RemoveContainer" containerID="28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.168217 4642 scope.go:117] "RemoveContainer" containerID="a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.168541 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b\": container with ID starting with a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b not found: ID does not exist" containerID="a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.168584 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b"} err="failed to get container status \"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b\": rpc error: code = NotFound desc = could not find container \"a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b\": container with ID starting with a7e83eefca0603ae89083cb5c62472b4a4f14db759cbacca76ad9dd5e405ea2b not found: ID does not exist"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.168612 4642 scope.go:117] "RemoveContainer" containerID="e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"
Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.168963 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1\": container with ID starting with e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1 not found: ID does not exist" containerID="e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.168987 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1"} err="failed to get container status \"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1\": rpc error: code = NotFound desc = could not find container \"e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1\": container with ID starting with e118a0c0c1ddf30f75a05a97651134e10797b190ab863647e80a648e8f6b77b1 not found: ID does not exist"
Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.169000 4642 scope.go:117] "RemoveContainer" containerID="7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3"
Jan 28 07:06:36 
crc kubenswrapper[4642]: E0128 07:06:36.169225 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3\": container with ID starting with 7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3 not found: ID does not exist" containerID="7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.169248 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3"} err="failed to get container status \"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3\": rpc error: code = NotFound desc = could not find container \"7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3\": container with ID starting with 7dca94b318fe03e75ba0ad0ac3926bb7f1cedbe07925b261ada02f01fda89ba3 not found: ID does not exist" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.169259 4642 scope.go:117] "RemoveContainer" containerID="28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299" Jan 28 07:06:36 crc kubenswrapper[4642]: E0128 07:06:36.169471 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299\": container with ID starting with 28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299 not found: ID does not exist" containerID="28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.169492 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299"} err="failed to get container status \"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299\": rpc error: code = NotFound desc = could not find container \"28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299\": container with ID starting with 28a462dbfd6310fa9a18f6a7289f2999423c709e95a1dfed7ef067ae335d3299 not found: ID does not exist" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226114 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226222 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwngr\" (UniqueName: \"kubernetes.io/projected/f050f220-7652-43cd-8de4-fe0f291f46cc-kube-api-access-xwngr\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226253 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226385 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-scripts\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226502 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226668 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-config-data\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226733 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.226869 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328665 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328733 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328774 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwngr\" (UniqueName: \"kubernetes.io/projected/f050f220-7652-43cd-8de4-fe0f291f46cc-kube-api-access-xwngr\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328799 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328853 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-scripts\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc 
kubenswrapper[4642]: I0128 07:06:36.328888 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328915 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-config-data\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.328929 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.329161 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-log-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.329391 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f050f220-7652-43cd-8de4-fe0f291f46cc-run-httpd\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.332598 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.334157 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.336611 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.336827 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-config-data\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.337649 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f050f220-7652-43cd-8de4-fe0f291f46cc-scripts\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.347760 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwngr\" (UniqueName: \"kubernetes.io/projected/f050f220-7652-43cd-8de4-fe0f291f46cc-kube-api-access-xwngr\") pod \"ceilometer-0\" (UID: \"f050f220-7652-43cd-8de4-fe0f291f46cc\") " pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.442885 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 07:06:36 crc kubenswrapper[4642]: I0128 07:06:36.899147 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 07:06:37 crc kubenswrapper[4642]: I0128 07:06:37.053905 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f050f220-7652-43cd-8de4-fe0f291f46cc","Type":"ContainerStarted","Data":"05160d29600cdfcce6ddb3e66eb05843e2e52f5069b6117b6b2533a0eaf75800"} Jan 28 07:06:37 crc kubenswrapper[4642]: I0128 07:06:37.110149 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8e1ac-188f-44f8-a158-3a1c97035d42" path="/var/lib/kubelet/pods/71c8e1ac-188f-44f8-a158-3a1c97035d42/volumes" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.200131 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.200219 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.200277 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.201021 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.201090 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3" gracePeriod=600 Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.646081 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.677030 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle\") pod \"749b478a-1e5f-4637-88b4-fed9484b01a6\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.677262 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lldg\" (UniqueName: \"kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg\") pod \"749b478a-1e5f-4637-88b4-fed9484b01a6\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.677306 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs\") pod \"749b478a-1e5f-4637-88b4-fed9484b01a6\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.677406 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data\") pod \"749b478a-1e5f-4637-88b4-fed9484b01a6\" (UID: \"749b478a-1e5f-4637-88b4-fed9484b01a6\") " Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.677852 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs" (OuterVolumeSpecName: "logs") pod "749b478a-1e5f-4637-88b4-fed9484b01a6" (UID: "749b478a-1e5f-4637-88b4-fed9484b01a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.678042 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749b478a-1e5f-4637-88b4-fed9484b01a6-logs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.684346 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg" (OuterVolumeSpecName: "kube-api-access-6lldg") pod "749b478a-1e5f-4637-88b4-fed9484b01a6" (UID: "749b478a-1e5f-4637-88b4-fed9484b01a6"). InnerVolumeSpecName "kube-api-access-6lldg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.711830 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data" (OuterVolumeSpecName: "config-data") pod "749b478a-1e5f-4637-88b4-fed9484b01a6" (UID: "749b478a-1e5f-4637-88b4-fed9484b01a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.718080 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "749b478a-1e5f-4637-88b4-fed9484b01a6" (UID: "749b478a-1e5f-4637-88b4-fed9484b01a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.779568 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.779719 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lldg\" (UniqueName: \"kubernetes.io/projected/749b478a-1e5f-4637-88b4-fed9484b01a6-kube-api-access-6lldg\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:38 crc kubenswrapper[4642]: I0128 07:06:38.779731 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749b478a-1e5f-4637-88b4-fed9484b01a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.074954 4642 generic.go:334] "Generic (PLEG): container finished" podID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerID="9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c" exitCode=0 Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.075017 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerDied","Data":"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c"} Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.075064 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.075392 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"749b478a-1e5f-4637-88b4-fed9484b01a6","Type":"ContainerDied","Data":"f77e24f49c7200f0739a9f8ef0cb1fff5beaf968fcd91e272fc34a89439871f2"} Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.075474 4642 scope.go:117] "RemoveContainer" containerID="9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.078819 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3" exitCode=0 Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.078955 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3"} Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.079035 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09"} Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.097276 4642 scope.go:117] "RemoveContainer" containerID="ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.131812 4642 scope.go:117] "RemoveContainer" containerID="9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c" Jan 28 07:06:39 crc kubenswrapper[4642]: E0128 07:06:39.132285 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c\": container with ID starting with 9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c not found: ID does not exist" containerID="9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.132351 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c"} err="failed to get container status \"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c\": rpc error: code = NotFound desc = could not find container \"9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c\": container with ID starting with 9a805a21d78df60974b018c82e29f8f2a272e4ec877db7a46c4af32d181aa19c not found: ID does not exist" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.132379 4642 scope.go:117] "RemoveContainer" containerID="ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e" Jan 28 07:06:39 crc kubenswrapper[4642]: E0128 07:06:39.132790 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e\": container with ID starting with ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e not found: ID does not exist" containerID="ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.132836 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e"} err="failed to get container status \"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e\": rpc error: code = NotFound desc = could not find container \"ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e\": container with ID starting with ab2bb35880d579f1e17ca079729973478d5819f01d43f4be5a3936a8c725341e not found: ID does not exist" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.132872 4642 scope.go:117] "RemoveContainer" containerID="682b9c3bf1397b4c59a77c5d98ab360bbde5aa7c24a95922a398468c4fd1fcb1" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.151257 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.168169 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.178594 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:39 crc kubenswrapper[4642]: E0128 07:06:39.179016 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-log" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.179039 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-log" Jan 28 07:06:39 crc kubenswrapper[4642]: E0128 07:06:39.179057 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-api" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.179063 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-api" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 
07:06:39.179314 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-log" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.179332 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" containerName="nova-api-api" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.180900 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.183259 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.183586 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.183628 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.191783 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.289959 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.290071 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.290098 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.290284 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.290322 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrx7g\" (UniqueName: \"kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.290349 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.392785 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.393144 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.393303 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.393356 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrx7g\" (UniqueName: \"kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.393384 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.393521 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.394017 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.397800 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.398857 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.399620 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.402100 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.410316 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrx7g\" (UniqueName: \"kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g\") pod \"nova-api-0\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") " pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.500200 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.510128 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:06:39 crc kubenswrapper[4642]: I0128 07:06:39.923042 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:39 crc kubenswrapper[4642]: W0128 07:06:39.927774 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1ac90f_da6a_4452_b8fb_e9a369ee208e.slice/crio-d0d9bab873ca3d47ada0881245b648fece172da2fe79b74f501764c680b27f3e WatchSource:0}: Error finding container d0d9bab873ca3d47ada0881245b648fece172da2fe79b74f501764c680b27f3e: Status 404 returned error can't find the container with id d0d9bab873ca3d47ada0881245b648fece172da2fe79b74f501764c680b27f3e Jan 28 07:06:40 crc kubenswrapper[4642]: I0128 07:06:40.094020 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerStarted","Data":"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"} Jan 28 07:06:40 crc kubenswrapper[4642]: I0128 07:06:40.094065 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerStarted","Data":"d0d9bab873ca3d47ada0881245b648fece172da2fe79b74f501764c680b27f3e"} Jan 28 07:06:40 crc kubenswrapper[4642]: I0128 07:06:40.095505 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f050f220-7652-43cd-8de4-fe0f291f46cc","Type":"ContainerStarted","Data":"37fae0a9a64ad79db86d7d35f842047f4b1c8870793253712a89084a9cd4dbd6"} Jan 28 07:06:41 crc kubenswrapper[4642]: I0128 07:06:41.108224 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="749b478a-1e5f-4637-88b4-fed9484b01a6" path="/var/lib/kubelet/pods/749b478a-1e5f-4637-88b4-fed9484b01a6/volumes" Jan 28 07:06:41 crc kubenswrapper[4642]: I0128 07:06:41.109593 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f050f220-7652-43cd-8de4-fe0f291f46cc","Type":"ContainerStarted","Data":"0768ff77bcc95ba7f56474fffd3e3a39b15f3e6e211aec9e96367a85ab076a9b"} Jan 28 07:06:41 crc kubenswrapper[4642]: I0128 07:06:41.109635 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f050f220-7652-43cd-8de4-fe0f291f46cc","Type":"ContainerStarted","Data":"4f05dab48f94d788839a04b62c7b307e795e4542d442c420c81cf598c77cd0ab"} Jan 28 07:06:41 crc kubenswrapper[4642]: I0128 07:06:41.109647 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerStarted","Data":"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"} Jan 28 07:06:41 crc kubenswrapper[4642]: I0128 07:06:41.130546 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.130528253 podStartE2EDuration="2.130528253s" podCreationTimestamp="2026-01-28 07:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:41.12214443 +0000 UTC m=+1124.354233239" watchObservedRunningTime="2026-01-28 07:06:41.130528253 +0000 UTC m=+1124.362617062" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.117044 4642 generic.go:334] "Generic (PLEG): container finished" podID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" containerID="cb3aa49821a306881840a596197f91894ddf6efbfc716166791e344624614393" exitCode=137 Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.117244 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a6a27a0-9f13-4980-95f5-ec06d1be7492","Type":"ContainerDied","Data":"cb3aa49821a306881840a596197f91894ddf6efbfc716166791e344624614393"} Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.644135 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.644414 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.649372 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data\") pod \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.649406 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh7rq\" (UniqueName: \"kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq\") pod \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.649570 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle\") pod \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\" (UID: \"0a6a27a0-9f13-4980-95f5-ec06d1be7492\") " Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.654336 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq" (OuterVolumeSpecName: "kube-api-access-qh7rq") pod "0a6a27a0-9f13-4980-95f5-ec06d1be7492" (UID: "0a6a27a0-9f13-4980-95f5-ec06d1be7492"). InnerVolumeSpecName "kube-api-access-qh7rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.727025 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.727359 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="dnsmasq-dns" containerID="cri-o://5ca8d221107a797165bcbbe394753e29d5cef4cb1e8fdf395528377cf1de7395" gracePeriod=10 Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.727398 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data" (OuterVolumeSpecName: "config-data") pod "0a6a27a0-9f13-4980-95f5-ec06d1be7492" (UID: "0a6a27a0-9f13-4980-95f5-ec06d1be7492"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.738429 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a6a27a0-9f13-4980-95f5-ec06d1be7492" (UID: "0a6a27a0-9f13-4980-95f5-ec06d1be7492"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.750790 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.750825 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh7rq\" (UniqueName: \"kubernetes.io/projected/0a6a27a0-9f13-4980-95f5-ec06d1be7492-kube-api-access-qh7rq\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:42 crc kubenswrapper[4642]: I0128 07:06:42.750837 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6a27a0-9f13-4980-95f5-ec06d1be7492-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.126818 4642 generic.go:334] "Generic (PLEG): container finished" podID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerID="5ca8d221107a797165bcbbe394753e29d5cef4cb1e8fdf395528377cf1de7395" exitCode=0 Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.127098 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" event={"ID":"4c8ab084-e8fb-46a0-8196-1b16c11574cc","Type":"ContainerDied","Data":"5ca8d221107a797165bcbbe394753e29d5cef4cb1e8fdf395528377cf1de7395"} Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.127123 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" event={"ID":"4c8ab084-e8fb-46a0-8196-1b16c11574cc","Type":"ContainerDied","Data":"95b5da2f34c8c1d03d774f09fb985ca7c07273343ec305bf2c2c4a15183adefe"} Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.127134 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b5da2f34c8c1d03d774f09fb985ca7c07273343ec305bf2c2c4a15183adefe" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.128974 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f050f220-7652-43cd-8de4-fe0f291f46cc","Type":"ContainerStarted","Data":"5759b6754446ea10a4baa0ebd3298c0375e57ad0d8edd578f716020188533178"} Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.129921 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.132650 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0a6a27a0-9f13-4980-95f5-ec06d1be7492","Type":"ContainerDied","Data":"b5eefdaad6eebc05c7d2c9748459c727908002b2bf34914d8d0903731a1b4a48"} Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.132702 4642 scope.go:117] "RemoveContainer" containerID="cb3aa49821a306881840a596197f91894ddf6efbfc716166791e344624614393" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.132812 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.134003 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.160328 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.290548409 podStartE2EDuration="7.160310823s" podCreationTimestamp="2026-01-28 07:06:36 +0000 UTC" firstStartedPulling="2026-01-28 07:06:36.898087456 +0000 UTC m=+1120.130176255" lastFinishedPulling="2026-01-28 07:06:42.76784986 +0000 UTC m=+1125.999938669" observedRunningTime="2026-01-28 07:06:43.149869912 +0000 UTC m=+1126.381958722" watchObservedRunningTime="2026-01-28 07:06:43.160310823 +0000 UTC m=+1126.392399631" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.216503 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.239024 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.248613 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:43 crc kubenswrapper[4642]: E0128 07:06:43.248912 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="init" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.248923 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="init" Jan 28 07:06:43 crc kubenswrapper[4642]: E0128 07:06:43.248933 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" containerName="nova-scheduler-scheduler" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.248939 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" containerName="nova-scheduler-scheduler" Jan 28 07:06:43 crc kubenswrapper[4642]: E0128 07:06:43.248958 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="dnsmasq-dns" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.248964 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="dnsmasq-dns" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.249122 4642 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" containerName="nova-scheduler-scheduler" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.249137 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" containerName="dnsmasq-dns" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.249651 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.257152 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.262830 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.267897 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.267930 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.268438 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5df9\" (UniqueName: \"kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.268471 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.268573 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.268588 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0\") pod \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\" (UID: \"4c8ab084-e8fb-46a0-8196-1b16c11574cc\") " Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.268922 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.269004 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mjk\" (UniqueName: 
\"kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.269058 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.272585 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9" (OuterVolumeSpecName: "kube-api-access-p5df9") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "kube-api-access-p5df9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.308674 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.318636 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config" (OuterVolumeSpecName: "config") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.322653 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.322660 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.323201 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c8ab084-e8fb-46a0-8196-1b16c11574cc" (UID: "4c8ab084-e8fb-46a0-8196-1b16c11574cc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370608 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370796 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7mjk\" (UniqueName: \"kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370853 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370864 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370875 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370884 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370893 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5df9\" (UniqueName: \"kubernetes.io/projected/4c8ab084-e8fb-46a0-8196-1b16c11574cc-kube-api-access-p5df9\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.370901 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c8ab084-e8fb-46a0-8196-1b16c11574cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.374836 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.374891 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.393362 4642 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7mjk\" (UniqueName: \"kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk\") pod \"nova-scheduler-0\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:43 crc kubenswrapper[4642]: I0128 07:06:43.586098 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.016567 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.144078 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cba030-bc1a-4fb5-8129-a97647377bdf","Type":"ContainerStarted","Data":"1cdc1d2eae632861bd3601676fae0f74bed2abe322d1d3c9a863a969ed55e605"} Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.146974 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84f7f4c6c9-w4c7k" Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.178661 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.185124 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84f7f4c6c9-w4c7k"] Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.511071 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:06:44 crc kubenswrapper[4642]: I0128 07:06:44.529606 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.107225 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a6a27a0-9f13-4980-95f5-ec06d1be7492" path="/var/lib/kubelet/pods/0a6a27a0-9f13-4980-95f5-ec06d1be7492/volumes" Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.107725 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8ab084-e8fb-46a0-8196-1b16c11574cc" path="/var/lib/kubelet/pods/4c8ab084-e8fb-46a0-8196-1b16c11574cc/volumes" Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.164136 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cba030-bc1a-4fb5-8129-a97647377bdf","Type":"ContainerStarted","Data":"5e1243183b4cec484e4fc9f4009af3d1d825e10a215e036dec907f67796ea5e8"} Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.187421 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.204666 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.204652089 podStartE2EDuration="2.204652089s" podCreationTimestamp="2026-01-28 07:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:45.187779243 +0000 UTC m=+1128.419868042" watchObservedRunningTime="2026-01-28 07:06:45.204652089 +0000 UTC m=+1128.436740898" Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.316866 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qkm72"] Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 
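The pod_startup_latency_tracker entry above reports podStartSLOduration=2.204652089 for nova-scheduler-0. With no image pull involved (both pulling timestamps are the zero time), that value appears to be watchObservedRunningTime minus podCreationTimestamp; the same relation holds for the nova-cell1-cell-mapping-qkm72 entry further down. A small Go check of that arithmetic, stripping Go's "m=+..." monotonic-clock suffix before parsing (a reader-side verification, not kubelet code):

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created := "2026-01-28 07:06:43 +0000 UTC"
	// watchObservedRunningTime as logged, minus the monotonic suffix.
	observed := strings.Split("2026-01-28 07:06:45.204652089 +0000 UTC m=+1128.436740898", " m=")[0]

	t0, err := time.Parse(layout, created)
	if err != nil {
		panic(err)
	}
	t1, err := time.Parse(layout, observed)
	if err != nil {
		panic(err)
	}
	fmt.Println(t1.Sub(t0)) // 2.204652089s, matching podStartSLOduration
}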
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.318032 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.319471 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.319490 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.324436 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qkm72"]
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.415630 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.415978 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4jv\" (UniqueName: \"kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.416134 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.416235 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.518346 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.518489 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4jv\" (UniqueName: \"kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.518565 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.518594 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.525284 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.525378 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.525610 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.540211 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4jv\" (UniqueName: \"kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv\") pod \"nova-cell1-cell-mapping-qkm72\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") " pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:45 crc kubenswrapper[4642]: I0128 07:06:45.634475 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:46 crc kubenswrapper[4642]: I0128 07:06:46.058344 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qkm72"]
Jan 28 07:06:46 crc kubenswrapper[4642]: I0128 07:06:46.176859 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qkm72" event={"ID":"dc69b5fc-a582-4c57-972a-fa5a1e0f0203","Type":"ContainerStarted","Data":"b2ecdcce3d0f8caff8ce9aeb12eb2587d957dc916f3c6f21f8a7d606f4310ed5"}
Jan 28 07:06:47 crc kubenswrapper[4642]: I0128 07:06:47.184119 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qkm72" event={"ID":"dc69b5fc-a582-4c57-972a-fa5a1e0f0203","Type":"ContainerStarted","Data":"5f0e66c6206c665d6b35e28f1980eb6c6b8888516ed8ccf1196fdd05c874ac2c"}
Jan 28 07:06:47 crc kubenswrapper[4642]: I0128 07:06:47.204497 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qkm72" podStartSLOduration=2.204482295 podStartE2EDuration="2.204482295s" podCreationTimestamp="2026-01-28 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:47.196544438 +0000 UTC m=+1130.428633247" watchObservedRunningTime="2026-01-28 07:06:47.204482295 +0000 UTC m=+1130.436571103"
Jan 28 07:06:48 crc kubenswrapper[4642]: I0128 07:06:48.587302 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 07:06:49 crc kubenswrapper[4642]: I0128 07:06:49.500689 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 07:06:49 crc kubenswrapper[4642]: I0128 07:06:49.500743 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 07:06:50 crc kubenswrapper[4642]: I0128 07:06:50.216592 4642 generic.go:334] "Generic (PLEG): container finished" podID="dc69b5fc-a582-4c57-972a-fa5a1e0f0203" containerID="5f0e66c6206c665d6b35e28f1980eb6c6b8888516ed8ccf1196fdd05c874ac2c" exitCode=0
Jan 28 07:06:50 crc kubenswrapper[4642]: I0128 07:06:50.216689 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qkm72" event={"ID":"dc69b5fc-a582-4c57-972a-fa5a1e0f0203","Type":"ContainerDied","Data":"5f0e66c6206c665d6b35e28f1980eb6c6b8888516ed8ccf1196fdd05c874ac2c"}
Jan 28 07:06:50 crc kubenswrapper[4642]: I0128 07:06:50.518323 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 07:06:50 crc kubenswrapper[4642]: I0128 07:06:50.518365 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.544622 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qkm72"
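The two "Probe failed" entries above are kubelet's HTTP prober timing out against https://10.217.0.198:8774/; both error strings ("request canceled" / "context deadline exceeded", each followed by "Client.Timeout exceeded while awaiting headers") are what net/http produces when a client-level timeout fires before response headers arrive. A minimal Go sketch reproducing that failure mode against a deliberately slow handler; the handler delay and the 500ms timeout are illustrative, not the pod's actual probe settings:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A handler that stalls longer than the client's timeout.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer srv.Close()

	// Kubelet-style probe: a GET bounded by a hard client timeout.
	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err := client.Get(srv.URL)
	// Prints an error ending in "(Client.Timeout exceeded while awaiting
	// headers)", the same text seen in the prober.go:107 entries above.
	fmt.Println(err)
}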
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.550293 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle\") pod \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") "
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.550465 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data\") pod \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") "
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.550633 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts\") pod \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") "
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.550724 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4jv\" (UniqueName: \"kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv\") pod \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\" (UID: \"dc69b5fc-a582-4c57-972a-fa5a1e0f0203\") "
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.557323 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv" (OuterVolumeSpecName: "kube-api-access-xf4jv") pod "dc69b5fc-a582-4c57-972a-fa5a1e0f0203" (UID: "dc69b5fc-a582-4c57-972a-fa5a1e0f0203"). InnerVolumeSpecName "kube-api-access-xf4jv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.562097 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts" (OuterVolumeSpecName: "scripts") pod "dc69b5fc-a582-4c57-972a-fa5a1e0f0203" (UID: "dc69b5fc-a582-4c57-972a-fa5a1e0f0203"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.588723 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data" (OuterVolumeSpecName: "config-data") pod "dc69b5fc-a582-4c57-972a-fa5a1e0f0203" (UID: "dc69b5fc-a582-4c57-972a-fa5a1e0f0203"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.595699 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc69b5fc-a582-4c57-972a-fa5a1e0f0203" (UID: "dc69b5fc-a582-4c57-972a-fa5a1e0f0203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.653425 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4jv\" (UniqueName: \"kubernetes.io/projected/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-kube-api-access-xf4jv\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.653457 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.653561 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:51 crc kubenswrapper[4642]: I0128 07:06:51.653572 4642 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc69b5fc-a582-4c57-972a-fa5a1e0f0203-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.234620 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qkm72" event={"ID":"dc69b5fc-a582-4c57-972a-fa5a1e0f0203","Type":"ContainerDied","Data":"b2ecdcce3d0f8caff8ce9aeb12eb2587d957dc916f3c6f21f8a7d606f4310ed5"}
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.234664 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ecdcce3d0f8caff8ce9aeb12eb2587d957dc916f3c6f21f8a7d606f4310ed5"
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.234679 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qkm72"
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.419227 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.419466 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-log" containerID="cri-o://1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748" gracePeriod=30
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.419805 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-api" containerID="cri-o://60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde" gracePeriod=30
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.428479 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.428612 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f6cba030-bc1a-4fb5-8129-a97647377bdf" containerName="nova-scheduler-scheduler" containerID="cri-o://5e1243183b4cec484e4fc9f4009af3d1d825e10a215e036dec907f67796ea5e8" gracePeriod=30
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.440392 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.440800 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" containerID="cri-o://1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e" gracePeriod=30
Jan 28 07:06:52 crc kubenswrapper[4642]: I0128 07:06:52.441254 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" containerID="cri-o://75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814" gracePeriod=30
Jan 28 07:06:53 crc kubenswrapper[4642]: I0128 07:06:53.246072 4642 generic.go:334] "Generic (PLEG): container finished" podID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerID="1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748" exitCode=143
Jan 28 07:06:53 crc kubenswrapper[4642]: I0128 07:06:53.246161 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerDied","Data":"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"}
Jan 28 07:06:53 crc kubenswrapper[4642]: I0128 07:06:53.252930 4642 generic.go:334] "Generic (PLEG): container finished" podID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerID="1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e" exitCode=143
Jan 28 07:06:53 crc kubenswrapper[4642]: I0128 07:06:53.252986 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerDied","Data":"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e"}
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.270529 4642 generic.go:334] "Generic (PLEG): container finished" podID="f6cba030-bc1a-4fb5-8129-a97647377bdf" containerID="5e1243183b4cec484e4fc9f4009af3d1d825e10a215e036dec907f67796ea5e8" exitCode=0
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.270844 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cba030-bc1a-4fb5-8129-a97647377bdf","Type":"ContainerDied","Data":"5e1243183b4cec484e4fc9f4009af3d1d825e10a215e036dec907f67796ea5e8"}
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.426381 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.438403 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data\") pod \"f6cba030-bc1a-4fb5-8129-a97647377bdf\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") "
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.462695 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data" (OuterVolumeSpecName: "config-data") pod "f6cba030-bc1a-4fb5-8129-a97647377bdf" (UID: "f6cba030-bc1a-4fb5-8129-a97647377bdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.539764 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle\") pod \"f6cba030-bc1a-4fb5-8129-a97647377bdf\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") "
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.539839 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7mjk\" (UniqueName: \"kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk\") pod \"f6cba030-bc1a-4fb5-8129-a97647377bdf\" (UID: \"f6cba030-bc1a-4fb5-8129-a97647377bdf\") "
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.540217 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.542710 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk" (OuterVolumeSpecName: "kube-api-access-k7mjk") pod "f6cba030-bc1a-4fb5-8129-a97647377bdf" (UID: "f6cba030-bc1a-4fb5-8129-a97647377bdf"). InnerVolumeSpecName "kube-api-access-k7mjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.562700 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6cba030-bc1a-4fb5-8129-a97647377bdf" (UID: "f6cba030-bc1a-4fb5-8129-a97647377bdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.572474 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:37400->10.217.0.194:8775: read: connection reset by peer"
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.572529 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:37414->10.217.0.194:8775: read: connection reset by peer"
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.642607 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cba030-bc1a-4fb5-8129-a97647377bdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.642647 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7mjk\" (UniqueName: \"kubernetes.io/projected/f6cba030-bc1a-4fb5-8129-a97647377bdf-kube-api-access-k7mjk\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.892942 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 07:06:55 crc kubenswrapper[4642]: I0128 07:06:55.932614 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.047988 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048034 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048055 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle\") pod \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048109 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs\") pod \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048125 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048154 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data\") pod \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048252 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.048793 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs" (OuterVolumeSpecName: "logs") pod "7864162e-d7ed-4456-8f84-5b75fcb78ab4" (UID: "7864162e-d7ed-4456-8f84-5b75fcb78ab4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.049704 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.050025 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs" (OuterVolumeSpecName: "logs") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.049740 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc7k7\" (UniqueName: \"kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7\") pod \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.050144 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrx7g\" (UniqueName: \"kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g\") pod \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\" (UID: \"cc1ac90f-da6a-4452-b8fb-e9a369ee208e\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.050407 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs\") pod \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\" (UID: \"7864162e-d7ed-4456-8f84-5b75fcb78ab4\") "
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.050668 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7864162e-d7ed-4456-8f84-5b75fcb78ab4-logs\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.050685 4642 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-logs\") on node \"crc\" DevicePath \"\""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.052456 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g" (OuterVolumeSpecName: "kube-api-access-wrx7g") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "kube-api-access-wrx7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.053272 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7" (OuterVolumeSpecName: "kube-api-access-vc7k7") pod "7864162e-d7ed-4456-8f84-5b75fcb78ab4" (UID: "7864162e-d7ed-4456-8f84-5b75fcb78ab4"). InnerVolumeSpecName "kube-api-access-vc7k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.069858 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data" (OuterVolumeSpecName: "config-data") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.071474 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7864162e-d7ed-4456-8f84-5b75fcb78ab4" (UID: "7864162e-d7ed-4456-8f84-5b75fcb78ab4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.071790 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.077293 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data" (OuterVolumeSpecName: "config-data") pod "7864162e-d7ed-4456-8f84-5b75fcb78ab4" (UID: "7864162e-d7ed-4456-8f84-5b75fcb78ab4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.089289 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.095161 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7864162e-d7ed-4456-8f84-5b75fcb78ab4" (UID: "7864162e-d7ed-4456-8f84-5b75fcb78ab4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.098574 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cc1ac90f-da6a-4452-b8fb-e9a369ee208e" (UID: "cc1ac90f-da6a-4452-b8fb-e9a369ee208e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153824 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153871 4642 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153887 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153898 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153908 4642 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153918 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc7k7\" (UniqueName: \"kubernetes.io/projected/7864162e-d7ed-4456-8f84-5b75fcb78ab4-kube-api-access-vc7k7\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153936 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrx7g\" (UniqueName: \"kubernetes.io/projected/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-kube-api-access-wrx7g\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153948 4642 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7864162e-d7ed-4456-8f84-5b75fcb78ab4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.153957 4642 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1ac90f-da6a-4452-b8fb-e9a369ee208e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.283026 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cba030-bc1a-4fb5-8129-a97647377bdf","Type":"ContainerDied","Data":"1cdc1d2eae632861bd3601676fae0f74bed2abe322d1d3c9a863a969ed55e605"} Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.283098 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.283614 4642 scope.go:117] "RemoveContainer" containerID="5e1243183b4cec484e4fc9f4009af3d1d825e10a215e036dec907f67796ea5e8" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.285597 4642 generic.go:334] "Generic (PLEG): container finished" podID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerID="75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814" exitCode=0 Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.285653 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerDied","Data":"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814"} Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.285717 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7864162e-d7ed-4456-8f84-5b75fcb78ab4","Type":"ContainerDied","Data":"746e988de22a35d54a0eee2e9f608e3ce4161ba5f45a350eaed5f30333478c2d"} Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.285762 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.290820 4642 generic.go:334] "Generic (PLEG): container finished" podID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerID="60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde" exitCode=0 Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.290858 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerDied","Data":"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"} Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.290972 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc1ac90f-da6a-4452-b8fb-e9a369ee208e","Type":"ContainerDied","Data":"d0d9bab873ca3d47ada0881245b648fece172da2fe79b74f501764c680b27f3e"} Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.290907 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.315272 4642 scope.go:117] "RemoveContainer" containerID="75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.323465 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.336301 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353327 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353781 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cba030-bc1a-4fb5-8129-a97647377bdf" containerName="nova-scheduler-scheduler" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353802 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cba030-bc1a-4fb5-8129-a97647377bdf" containerName="nova-scheduler-scheduler" Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353824 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353830 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353843 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353849 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353860 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-log" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353865 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-log" Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353872 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-api" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353877 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-api" Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.353890 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc69b5fc-a582-4c57-972a-fa5a1e0f0203" containerName="nova-manage" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.353895 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc69b5fc-a582-4c57-972a-fa5a1e0f0203" containerName="nova-manage" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354067 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-log" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354078 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" containerName="nova-api-api" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354087 4642 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dc69b5fc-a582-4c57-972a-fa5a1e0f0203" containerName="nova-manage" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354097 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-log" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354115 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cba030-bc1a-4fb5-8129-a97647377bdf" containerName="nova-scheduler-scheduler" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354127 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" containerName="nova-metadata-metadata" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354727 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.354881 4642 scope.go:117] "RemoveContainer" containerID="1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.366860 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.367277 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.384367 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.404315 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.408910 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.410366 4642 util.go:30] "No sandbox for pod can be found. 
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.410528 4642 scope.go:117] "RemoveContainer" containerID="75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814"
Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.410979 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814\": container with ID starting with 75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814 not found: ID does not exist" containerID="75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.411013 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814"} err="failed to get container status \"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814\": rpc error: code = NotFound desc = could not find container \"75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814\": container with ID starting with 75f3a9902ebcf91905e2c7dc10a2802d619e0085f28af9a63045e72989f11814 not found: ID does not exist"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.411036 4642 scope.go:117] "RemoveContainer" containerID="1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e"
Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.411394 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e\": container with ID starting with 1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e not found: ID does not exist" containerID="1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.411449 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e"} err="failed to get container status \"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e\": rpc error: code = NotFound desc = could not find container \"1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e\": container with ID starting with 1d337e693226c8fd6868e88f8c15c1d56249490f6a9248ae144cf9fc2dbf214e not found: ID does not exist"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.411495 4642 scope.go:117] "RemoveContainer" containerID="60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.412173 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.412484 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.412853 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.422716 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.432277 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.436781 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.441903 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.443142 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.444861 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.445052 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.448484 4642 scope.go:117] "RemoveContainer" containerID="1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.453453 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.461835 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-config-data\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.462255 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.462303 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-755b9\" (UniqueName: \"kubernetes.io/projected/68213c74-0be2-4d55-8f7c-7f5991da4f75-kube-api-access-755b9\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.468860 4642 scope.go:117] "RemoveContainer" containerID="60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"
Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.469368 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde\": container with ID starting with 60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde not found: ID does not exist" containerID="60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.469502 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde"} err="failed to get container status \"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde\": rpc error: code = NotFound desc = could not find container \"60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde\": container with ID starting with 60058dda9974735518e7ed17d9d4822532490c971aadd52b397b6d777c479cde not found: ID does not exist"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.469595 4642 scope.go:117] "RemoveContainer" containerID="1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"
Jan 28 07:06:56 crc kubenswrapper[4642]: E0128 07:06:56.470003 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748\": container with ID starting with 1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748 not found: ID does not exist" containerID="1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.470088 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"} err="failed to get container status \"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748\": rpc error: code = NotFound desc = could not find container \"1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748\": container with ID starting with 1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748 not found: ID does not exist"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564036 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-config-data\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564090 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-755b9\" (UniqueName: \"kubernetes.io/projected/68213c74-0be2-4d55-8f7c-7f5991da4f75-kube-api-access-755b9\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564112 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564152 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564334 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfd4\" (UniqueName: \"kubernetes.io/projected/fca45030-caaf-4344-8a7c-5440a27f8e57-kube-api-access-msfd4\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564453 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-config-data\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0"
Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564559 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmx7\" (UniqueName: \"kubernetes.io/projected/fafb88b9-f909-4a9c-92af-63b0428e44e8-kube-api-access-llmx7\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0"
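The "rpc error: code = NotFound" failures just above are benign races: the kubelet asks the CRI runtime for the status of a container it has already deleted, gets NotFound back over gRPC, and logs "DeleteContainer returned error" before moving on. Callers typically classify such errors by gRPC status code rather than by matching the message text; a small sketch using google.golang.org/grpc (the isNotFound helper is illustrative, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isNotFound reports whether a CRI call failed only because the
// container is already gone, as in the ContainerStatus errors above.
func isNotFound(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulate the runtime's reply for an already-removed container.
	err := status.Error(codes.NotFound, `could not find container "1cb9f1c65e54e140199e0c2c2edbac756d70e9c54471b8cecdfdac99fec40748"`)
	fmt.Println("treat as already removed:", isNotFound(err))
}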
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmx7\" (UniqueName: \"kubernetes.io/projected/fafb88b9-f909-4a9c-92af-63b0428e44e8-kube-api-access-llmx7\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564602 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-config-data\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564725 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564782 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-public-tls-certs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564852 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fafb88b9-f909-4a9c-92af-63b0428e44e8-logs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564908 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564964 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.564983 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca45030-caaf-4344-8a7c-5440a27f8e57-logs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.568381 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.568453 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68213c74-0be2-4d55-8f7c-7f5991da4f75-config-data\") pod \"nova-scheduler-0\" (UID: 
\"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.579453 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-755b9\" (UniqueName: \"kubernetes.io/projected/68213c74-0be2-4d55-8f7c-7f5991da4f75-kube-api-access-755b9\") pod \"nova-scheduler-0\" (UID: \"68213c74-0be2-4d55-8f7c-7f5991da4f75\") " pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666386 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmx7\" (UniqueName: \"kubernetes.io/projected/fafb88b9-f909-4a9c-92af-63b0428e44e8-kube-api-access-llmx7\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666499 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666544 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-public-tls-certs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666584 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fafb88b9-f909-4a9c-92af-63b0428e44e8-logs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666613 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666639 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca45030-caaf-4344-8a7c-5440a27f8e57-logs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666663 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-config-data\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666688 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666719 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666768 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msfd4\" (UniqueName: \"kubernetes.io/projected/fca45030-caaf-4344-8a7c-5440a27f8e57-kube-api-access-msfd4\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.666801 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-config-data\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.667634 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fafb88b9-f909-4a9c-92af-63b0428e44e8-logs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.668141 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca45030-caaf-4344-8a7c-5440a27f8e57-logs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.671756 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.671837 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-public-tls-certs\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.671869 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.672931 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-config-data\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.673372 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafb88b9-f909-4a9c-92af-63b0428e44e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.673983 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-config-data\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " 
pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.676060 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca45030-caaf-4344-8a7c-5440a27f8e57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.682943 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msfd4\" (UniqueName: \"kubernetes.io/projected/fca45030-caaf-4344-8a7c-5440a27f8e57-kube-api-access-msfd4\") pod \"nova-metadata-0\" (UID: \"fca45030-caaf-4344-8a7c-5440a27f8e57\") " pod="openstack/nova-metadata-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.683347 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmx7\" (UniqueName: \"kubernetes.io/projected/fafb88b9-f909-4a9c-92af-63b0428e44e8-kube-api-access-llmx7\") pod \"nova-api-0\" (UID: \"fafb88b9-f909-4a9c-92af-63b0428e44e8\") " pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.689278 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.725080 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 07:06:56 crc kubenswrapper[4642]: I0128 07:06:56.754844 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.109322 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7864162e-d7ed-4456-8f84-5b75fcb78ab4" path="/var/lib/kubelet/pods/7864162e-d7ed-4456-8f84-5b75fcb78ab4/volumes" Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.110070 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1ac90f-da6a-4452-b8fb-e9a369ee208e" path="/var/lib/kubelet/pods/cc1ac90f-da6a-4452-b8fb-e9a369ee208e/volumes" Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.110598 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cba030-bc1a-4fb5-8129-a97647377bdf" path="/var/lib/kubelet/pods/f6cba030-bc1a-4fb5-8129-a97647377bdf/volumes" Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.123050 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 07:06:57 crc kubenswrapper[4642]: W0128 07:06:57.123763 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68213c74_0be2_4d55_8f7c_7f5991da4f75.slice/crio-94fb126dd45952e07e5d4c01313a0d8638fdaa1c7b518ff824433f2b8dc1d7b6 WatchSource:0}: Error finding container 94fb126dd45952e07e5d4c01313a0d8638fdaa1c7b518ff824433f2b8dc1d7b6: Status 404 returned error can't find the container with id 94fb126dd45952e07e5d4c01313a0d8638fdaa1c7b518ff824433f2b8dc1d7b6 Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.276700 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 07:06:57 crc kubenswrapper[4642]: W0128 07:06:57.281955 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfafb88b9_f909_4a9c_92af_63b0428e44e8.slice/crio-0a872135d1b069a9c409e8f39100d58b710b61cdd4990f260f3a39299ab9ecb4 WatchSource:0}: 
Error finding container 0a872135d1b069a9c409e8f39100d58b710b61cdd4990f260f3a39299ab9ecb4: Status 404 returned error can't find the container with id 0a872135d1b069a9c409e8f39100d58b710b61cdd4990f260f3a39299ab9ecb4 Jan 28 07:06:57 crc kubenswrapper[4642]: W0128 07:06:57.283777 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfca45030_caaf_4344_8a7c_5440a27f8e57.slice/crio-4ab8090d193e133197eb19c946954ac0294639280b933833fd861c374232a625 WatchSource:0}: Error finding container 4ab8090d193e133197eb19c946954ac0294639280b933833fd861c374232a625: Status 404 returned error can't find the container with id 4ab8090d193e133197eb19c946954ac0294639280b933833fd861c374232a625 Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.285941 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.310317 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fafb88b9-f909-4a9c-92af-63b0428e44e8","Type":"ContainerStarted","Data":"0a872135d1b069a9c409e8f39100d58b710b61cdd4990f260f3a39299ab9ecb4"} Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.315849 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"68213c74-0be2-4d55-8f7c-7f5991da4f75","Type":"ContainerStarted","Data":"94fb126dd45952e07e5d4c01313a0d8638fdaa1c7b518ff824433f2b8dc1d7b6"} Jan 28 07:06:57 crc kubenswrapper[4642]: I0128 07:06:57.324168 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca45030-caaf-4344-8a7c-5440a27f8e57","Type":"ContainerStarted","Data":"4ab8090d193e133197eb19c946954ac0294639280b933833fd861c374232a625"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.334118 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fafb88b9-f909-4a9c-92af-63b0428e44e8","Type":"ContainerStarted","Data":"05eaa2a59cb86b1ae1d08930367fd22af4a040ccc62999edf3af2bba4c4c2687"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.334808 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fafb88b9-f909-4a9c-92af-63b0428e44e8","Type":"ContainerStarted","Data":"0de14494713a3e7400cb7360b23674343d0a50626349fae9138d83a663a72c21"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.336538 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"68213c74-0be2-4d55-8f7c-7f5991da4f75","Type":"ContainerStarted","Data":"57b1100b85fe59e289f5fe232b2bf640b69038c8082826cede32aaba5ae93573"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.339001 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca45030-caaf-4344-8a7c-5440a27f8e57","Type":"ContainerStarted","Data":"54d3a6e2487a3754277889bdcfcdaefc9db2c68bcd80331dba88a84f66627b8b"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.339029 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca45030-caaf-4344-8a7c-5440a27f8e57","Type":"ContainerStarted","Data":"41a595acdd7ebe832b19550e0eb1b0269212dab09759368813dd470c607178a4"} Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.357241 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.357226855 podStartE2EDuration="2.357226855s" 
podCreationTimestamp="2026-01-28 07:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:58.351148494 +0000 UTC m=+1141.583237303" watchObservedRunningTime="2026-01-28 07:06:58.357226855 +0000 UTC m=+1141.589315664" Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.374470 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.374443497 podStartE2EDuration="2.374443497s" podCreationTimestamp="2026-01-28 07:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:58.3701666 +0000 UTC m=+1141.602255410" watchObservedRunningTime="2026-01-28 07:06:58.374443497 +0000 UTC m=+1141.606532306" Jan 28 07:06:58 crc kubenswrapper[4642]: I0128 07:06:58.399292 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.399269227 podStartE2EDuration="2.399269227s" podCreationTimestamp="2026-01-28 07:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:06:58.383898705 +0000 UTC m=+1141.615987513" watchObservedRunningTime="2026-01-28 07:06:58.399269227 +0000 UTC m=+1141.631358035" Jan 28 07:07:01 crc kubenswrapper[4642]: I0128 07:07:01.690080 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 07:07:01 crc kubenswrapper[4642]: I0128 07:07:01.755485 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 07:07:01 crc kubenswrapper[4642]: I0128 07:07:01.755535 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.453042 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.690137 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.712868 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.726924 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.727019 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.755578 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 07:07:06 crc kubenswrapper[4642]: I0128 07:07:06.755659 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 07:07:07 crc kubenswrapper[4642]: I0128 07:07:07.452367 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 07:07:07 crc kubenswrapper[4642]: I0128 07:07:07.743318 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fafb88b9-f909-4a9c-92af-63b0428e44e8" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:07:07 crc kubenswrapper[4642]: I0128 07:07:07.743537 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fafb88b9-f909-4a9c-92af-63b0428e44e8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:07:07 crc kubenswrapper[4642]: I0128 07:07:07.768336 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fca45030-caaf-4344-8a7c-5440a27f8e57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:07:07 crc kubenswrapper[4642]: I0128 07:07:07.768345 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fca45030-caaf-4344-8a7c-5440a27f8e57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.731685 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.733241 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.733420 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.733493 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.752741 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.753832 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.758967 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.759300 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 07:07:16 crc kubenswrapper[4642]: I0128 07:07:16.766002 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 07:07:17 crc kubenswrapper[4642]: I0128 07:07:17.515423 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 07:07:24 crc kubenswrapper[4642]: I0128 07:07:24.429131 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:25 crc kubenswrapper[4642]: I0128 07:07:25.372493 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:27 crc kubenswrapper[4642]: I0128 07:07:27.832265 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="rabbitmq" containerID="cri-o://a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379" gracePeriod=604797 Jan 28 07:07:28 crc 
kubenswrapper[4642]: I0128 07:07:28.873459 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="rabbitmq" containerID="cri-o://674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491" gracePeriod=604797 Jan 28 07:07:31 crc kubenswrapper[4642]: I0128 07:07:31.209695 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 28 07:07:31 crc kubenswrapper[4642]: I0128 07:07:31.468609 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 28 07:07:34 crc kubenswrapper[4642]: E0128 07:07:34.061256 4642 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc1c6a3_55ed_4caf_a1a6_5f5f90c41b80.slice/crio-a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc1c6a3_55ed_4caf_a1a6_5f5f90c41b80.slice/crio-conmon-a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379.scope\": RecentStats: unable to find data in memory cache]" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.223992 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278268 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278533 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278593 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278612 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278630 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2gsm\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 
07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.278650 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.279600 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.280911 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.285814 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm" (OuterVolumeSpecName: "kube-api-access-w2gsm") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "kube-api-access-w2gsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.285936 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.298655 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data" (OuterVolumeSpecName: "config-data") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.298744 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "local-storage11-crc". 
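Aside: the "Killing container with a grace period" entries above (gracePeriod=604797) show the kubelet counting down what is presumably a seven-day terminationGracePeriodSeconds of 604800 on the RabbitMQ pods: the API DELETE landed at 07:07:24/25 and the kills began about three seconds later. For orientation, here is a minimal client-go sketch of the kind of delete call that produces those "SyncLoop DELETE" entries, with an explicit grace period. The namespace and pod name come from the log; the program itself is hypothetical and is not how the operator actually issues the delete.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig in the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// 604800s (7 days) is the assumed terminationGracePeriodSeconds; the
	// kubelet above logs the remaining 604797s once the kill actually starts.
	grace := int64(604800)
	if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(), "rabbitmq-server-0",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		log.Fatal(err)
	}
	log.Println("delete accepted; kubelet begins graceful termination")
}

The log resumes below with the volume teardown for the terminated pod.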
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380230 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380294 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380351 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380429 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380446 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd\") pod \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\" (UID: \"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80\") " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380748 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380907 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380921 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380930 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380949 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380958 4642 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380966 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2gsm\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-kube-api-access-w2gsm\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.380975 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.384306 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.386354 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info" (OuterVolumeSpecName: "pod-info") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.405498 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.429261 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf" (OuterVolumeSpecName: "server-conf") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.467050 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" (UID: "ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.482412 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.482441 4642 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.482453 4642 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.482464 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.482472 4642 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.634181 4642 generic.go:334] "Generic (PLEG): container finished" podID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerID="a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379" exitCode=0 Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.634254 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerDied","Data":"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379"} Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.634282 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80","Type":"ContainerDied","Data":"f0ef3837e00efb888bf3dd13571ba78d0dc10a40f4512621065043fc4a8751b9"} Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.634291 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.634300 4642 scope.go:117] "RemoveContainer" containerID="a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.658875 4642 scope.go:117] "RemoveContainer" containerID="73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.659701 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.665560 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.679521 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:34 crc kubenswrapper[4642]: E0128 07:07:34.679874 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="setup-container" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.679892 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="setup-container" Jan 28 07:07:34 crc kubenswrapper[4642]: E0128 07:07:34.679909 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="rabbitmq" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.679915 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="rabbitmq" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.680138 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" containerName="rabbitmq" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.681037 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.683137 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.683331 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.683462 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.683726 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.683862 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjs64" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.684013 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.685434 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.687274 4642 scope.go:117] "RemoveContainer" containerID="a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379" Jan 28 07:07:34 crc kubenswrapper[4642]: E0128 07:07:34.687690 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379\": container with ID starting with a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379 not found: ID does not exist" containerID="a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.687721 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379"} err="failed to get container status \"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379\": rpc error: code = NotFound desc = could not find container \"a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379\": container with ID starting with a6dca50e95cfeef3ed355eb217290c8896aaf62041e2eb565475c2cf4b367379 not found: ID does not exist" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.687747 4642 scope.go:117] "RemoveContainer" containerID="73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5" Jan 28 07:07:34 crc kubenswrapper[4642]: E0128 07:07:34.688159 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5\": container with ID starting with 73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5 not found: ID does not exist" containerID="73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.688216 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5"} err="failed to get container status \"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5\": rpc error: code = NotFound desc = could not find container 
\"73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5\": container with ID starting with 73f34878b20e4eb14d546def2579541109d36943f72b1b052629caa985ad90b5 not found: ID does not exist" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.692983 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789273 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789325 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789421 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-config-data\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789440 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789521 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62614395-0b52-4d39-865d-c42587ac034b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789550 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62614395-0b52-4d39-865d-c42587ac034b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789568 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789585 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789630 4642 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789654 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.789721 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7k25\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-kube-api-access-f7k25\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890866 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-config-data\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890896 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890937 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62614395-0b52-4d39-865d-c42587ac034b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890952 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62614395-0b52-4d39-865d-c42587ac034b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890969 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.890993 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891035 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891058 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891161 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7k25\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-kube-api-access-f7k25\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891253 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891297 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.891356 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.892024 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.892181 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-config-data\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.892327 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62614395-0b52-4d39-865d-c42587ac034b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.892538 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.894055 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/62614395-0b52-4d39-865d-c42587ac034b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.894458 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62614395-0b52-4d39-865d-c42587ac034b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.894481 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.895038 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.902653 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.904271 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7k25\" (UniqueName: \"kubernetes.io/projected/62614395-0b52-4d39-865d-c42587ac034b-kube-api-access-f7k25\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:34 crc kubenswrapper[4642]: I0128 07:07:34.923691 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62614395-0b52-4d39-865d-c42587ac034b\") " pod="openstack/rabbitmq-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.000571 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.112742 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80" path="/var/lib/kubelet/pods/ffc1c6a3-55ed-4caf-a1a6-5f5f90c41b80/volumes" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.374279 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.388625 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504015 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504054 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504084 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504128 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504480 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504567 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfgff\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.504708 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505052 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505127 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505162 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505210 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505245 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret\") pod \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\" (UID: \"716da2e6-dc75-431b-aa9e-d22bb4e0f91b\") " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505529 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.505701 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.506050 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.506070 4642 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.506079 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.507612 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff" (OuterVolumeSpecName: "kube-api-access-tfgff") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). 
InnerVolumeSpecName "kube-api-access-tfgff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.508219 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info" (OuterVolumeSpecName: "pod-info") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.508444 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.508452 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.509377 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.529484 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data" (OuterVolumeSpecName: "config-data") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.542435 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf" (OuterVolumeSpecName: "server-conf") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.581655 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "716da2e6-dc75-431b-aa9e-d22bb4e0f91b" (UID: "716da2e6-dc75-431b-aa9e-d22bb4e0f91b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608128 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608153 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfgff\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-kube-api-access-tfgff\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608164 4642 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608206 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608217 4642 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608225 4642 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608234 4642 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.608243 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716da2e6-dc75-431b-aa9e-d22bb4e0f91b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.622788 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.643159 4642 generic.go:334] "Generic (PLEG): container finished" podID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerID="674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491" exitCode=0 Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.643220 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerDied","Data":"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491"} Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.643450 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"716da2e6-dc75-431b-aa9e-d22bb4e0f91b","Type":"ContainerDied","Data":"d193fd150436f5a279a12e20709d3c4bc0c3e2f8c94762370ec3731abbe81992"} Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.643484 4642 scope.go:117] "RemoveContainer" containerID="674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.643265 4642 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.651251 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62614395-0b52-4d39-865d-c42587ac034b","Type":"ContainerStarted","Data":"5a9d88a18eb76d5c268ce5211b1e259ce64c4b89839e43f8bf2b4c4d3c7bbf38"} Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.671402 4642 scope.go:117] "RemoveContainer" containerID="083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.694082 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.705881 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.705895 4642 scope.go:117] "RemoveContainer" containerID="674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491" Jan 28 07:07:35 crc kubenswrapper[4642]: E0128 07:07:35.706630 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491\": container with ID starting with 674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491 not found: ID does not exist" containerID="674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.706667 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491"} err="failed to get container status \"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491\": rpc error: code = NotFound desc = could not find container \"674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491\": container with ID starting with 674cf5c033eb05b2ed7c34c5140170164a7e274c8ffffc8e1733de0f0dff9491 not found: ID does not exist" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.706691 4642 scope.go:117] "RemoveContainer" containerID="083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9" Jan 28 07:07:35 crc kubenswrapper[4642]: E0128 07:07:35.707367 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9\": container with ID starting with 083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9 not found: ID does not exist" containerID="083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.707415 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9"} err="failed to get container status \"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9\": rpc error: code = NotFound desc = could not find container \"083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9\": container with ID starting with 083b2e1874e89fce6b3baf1494aef28230adc9faaa23cf7b76895400a29413b9 not found: ID does not exist" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.709906 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.716311 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:35 crc kubenswrapper[4642]: E0128 07:07:35.716719 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="setup-container" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.716778 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="setup-container" Jan 28 07:07:35 crc kubenswrapper[4642]: E0128 07:07:35.716848 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="rabbitmq" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.716894 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="rabbitmq" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.717110 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" containerName="rabbitmq" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.717996 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.720413 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.720551 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.720850 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ntsp8" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.720435 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.720447 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.723539 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.723557 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.728032 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.913930 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.913998 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914046 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914090 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914146 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914203 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914265 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z74d\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-kube-api-access-8z74d\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914280 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/830d2eb5-3d8a-4b74-833e-758894985129-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914312 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914327 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/830d2eb5-3d8a-4b74-833e-758894985129-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:35 crc kubenswrapper[4642]: I0128 07:07:35.914408 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016522 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016630 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z74d\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-kube-api-access-8z74d\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016662 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/830d2eb5-3d8a-4b74-833e-758894985129-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016700 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016716 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/830d2eb5-3d8a-4b74-833e-758894985129-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016781 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016798 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016825 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016877 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 
07:07:36.016925 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.016966 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.017007 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.017900 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.018436 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.018701 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.018841 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.019769 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/830d2eb5-3d8a-4b74-833e-758894985129-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.031597 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z74d\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-kube-api-access-8z74d\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.035957 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.112993 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/830d2eb5-3d8a-4b74-833e-758894985129-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.113521 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/830d2eb5-3d8a-4b74-833e-758894985129-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.114049 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.114874 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/830d2eb5-3d8a-4b74-833e-758894985129-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"830d2eb5-3d8a-4b74-833e-758894985129\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.333791 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.661960 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62614395-0b52-4d39-865d-c42587ac034b","Type":"ContainerStarted","Data":"e15e11a1bf6c44f634ff6bb3e3f518f98158c7841fb8fd6981ee7e3d720e42a3"} Jan 28 07:07:36 crc kubenswrapper[4642]: W0128 07:07:36.720810 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830d2eb5_3d8a_4b74_833e_758894985129.slice/crio-d6e5e9c7fa01ab55c00be7b5646ed3b690a086349b4244b8921cbb5a60907d5b WatchSource:0}: Error finding container d6e5e9c7fa01ab55c00be7b5646ed3b690a086349b4244b8921cbb5a60907d5b: Status 404 returned error can't find the container with id d6e5e9c7fa01ab55c00be7b5646ed3b690a086349b4244b8921cbb5a60907d5b Jan 28 07:07:36 crc kubenswrapper[4642]: I0128 07:07:36.720854 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 07:07:37 crc kubenswrapper[4642]: I0128 07:07:37.107750 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716da2e6-dc75-431b-aa9e-d22bb4e0f91b" path="/var/lib/kubelet/pods/716da2e6-dc75-431b-aa9e-d22bb4e0f91b/volumes" Jan 28 07:07:37 crc kubenswrapper[4642]: I0128 07:07:37.672557 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"830d2eb5-3d8a-4b74-833e-758894985129","Type":"ContainerStarted","Data":"d4d710cb2b90d7c911233a5af5bdbd9c2de7328bf8f14ff71e247ccdd5b3d7f7"} Jan 28 07:07:37 crc kubenswrapper[4642]: I0128 07:07:37.672775 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"830d2eb5-3d8a-4b74-833e-758894985129","Type":"ContainerStarted","Data":"d6e5e9c7fa01ab55c00be7b5646ed3b690a086349b4244b8921cbb5a60907d5b"} Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.751264 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"] Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.753051 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.754820 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.762544 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"] Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.885580 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.885722 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.885815 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.885849 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhmq\" (UniqueName: \"kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.886117 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.886225 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.886270 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988369 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988537 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhmq\" (UniqueName: \"kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988679 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988785 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988866 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.988957 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989099 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989239 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989337 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989582 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989696 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.989746 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:39 crc kubenswrapper[4642]: I0128 07:07:39.990239 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.003816 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhmq\" (UniqueName: \"kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq\") pod \"dnsmasq-dns-7d764c6d8c-5m899\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") " pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.072351 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:40 crc kubenswrapper[4642]: W0128 07:07:40.445238 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56be9971_eb5a_4b59_8f28_ef459c6269ec.slice/crio-5ead8574c93ca7d2ea496444f06ea4dea3ab07679913fe92381f9a0e13be12e1 WatchSource:0}: Error finding container 5ead8574c93ca7d2ea496444f06ea4dea3ab07679913fe92381f9a0e13be12e1: Status 404 returned error can't find the container with id 5ead8574c93ca7d2ea496444f06ea4dea3ab07679913fe92381f9a0e13be12e1 Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.445634 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"] Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.703716 4642 generic.go:334] "Generic (PLEG): container finished" podID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerID="43e88cfb1e4e5fa3926dfdb5a67ffc8cdf798be0b2b635fa5c6457b767a7c96e" exitCode=0 Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.703816 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" event={"ID":"56be9971-eb5a-4b59-8f28-ef459c6269ec","Type":"ContainerDied","Data":"43e88cfb1e4e5fa3926dfdb5a67ffc8cdf798be0b2b635fa5c6457b767a7c96e"} Jan 28 07:07:40 crc kubenswrapper[4642]: I0128 07:07:40.703910 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" event={"ID":"56be9971-eb5a-4b59-8f28-ef459c6269ec","Type":"ContainerStarted","Data":"5ead8574c93ca7d2ea496444f06ea4dea3ab07679913fe92381f9a0e13be12e1"} Jan 28 07:07:41 crc kubenswrapper[4642]: I0128 07:07:41.712774 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" event={"ID":"56be9971-eb5a-4b59-8f28-ef459c6269ec","Type":"ContainerStarted","Data":"47760792c76942cbc48a6ef7e16a905ec3066a045b68df2f51a339e8fc0b2f3f"} Jan 28 07:07:41 crc kubenswrapper[4642]: I0128 07:07:41.712973 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:41 crc kubenswrapper[4642]: I0128 07:07:41.730690 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" podStartSLOduration=2.730674911 podStartE2EDuration="2.730674911s" podCreationTimestamp="2026-01-28 07:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:07:41.72552322 +0000 UTC m=+1184.957612029" watchObservedRunningTime="2026-01-28 07:07:41.730674911 +0000 UTC m=+1184.962763719" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.073332 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.123029 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"] Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.123283 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="dnsmasq-dns" containerID="cri-o://59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998" gracePeriod=10 Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.280312 4642 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-774c5cf667-7hfrh"] Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.281768 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.297234 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774c5cf667-7hfrh"] Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375541 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-svc\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375585 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-swift-storage-0\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375620 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shglf\" (UniqueName: \"kubernetes.io/projected/592db514-d1a6-421d-87f4-60ab08a05885-kube-api-access-shglf\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375756 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-nb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375879 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-config\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375929 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-openstack-edpm-ipam\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.375996 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-sb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.477536 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-nb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" 
(UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.477814 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-config\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.477846 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-openstack-edpm-ipam\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.477935 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-sb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478081 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-svc\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478102 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-swift-storage-0\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478134 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shglf\" (UniqueName: \"kubernetes.io/projected/592db514-d1a6-421d-87f4-60ab08a05885-kube-api-access-shglf\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478316 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-nb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478575 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-config\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478627 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-openstack-edpm-ipam\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " 
pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478737 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-ovsdbserver-sb\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.478876 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-svc\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.479049 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/592db514-d1a6-421d-87f4-60ab08a05885-dns-swift-storage-0\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.493178 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shglf\" (UniqueName: \"kubernetes.io/projected/592db514-d1a6-421d-87f4-60ab08a05885-kube-api-access-shglf\") pod \"dnsmasq-dns-774c5cf667-7hfrh\" (UID: \"592db514-d1a6-421d-87f4-60ab08a05885\") " pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.534571 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.601690 4642 util.go:30] "No sandbox for pod can be found. 
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681304 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczbf\" (UniqueName: \"kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681372 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681509 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681550 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681568 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.681645 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb\") pod \"3966764e-5967-46c5-9b39-88fd738b8b0a\" (UID: \"3966764e-5967-46c5-9b39-88fd738b8b0a\") "
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.684962 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf" (OuterVolumeSpecName: "kube-api-access-kczbf") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "kube-api-access-kczbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.719694 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.720017 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.723211 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.731695 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config" (OuterVolumeSpecName: "config") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.734788 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3966764e-5967-46c5-9b39-88fd738b8b0a" (UID: "3966764e-5967-46c5-9b39-88fd738b8b0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.741609 4642 generic.go:334] "Generic (PLEG): container finished" podID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerID="59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998" exitCode=0
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.741645 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" event={"ID":"3966764e-5967-46c5-9b39-88fd738b8b0a","Type":"ContainerDied","Data":"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"}
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.741673 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx" event={"ID":"3966764e-5967-46c5-9b39-88fd738b8b0a","Type":"ContainerDied","Data":"0bd27367e383ac4e50b5c414b55a0190baa3c4288037b53dc8ae32ae9a08ba73"}
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.741691 4642 scope.go:117] "RemoveContainer" containerID="59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.741805 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59775c5b57-vjbrx"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.775245 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"]
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.778548 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59775c5b57-vjbrx"]
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.780169 4642 scope.go:117] "RemoveContainer" containerID="74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783427 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783452 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783462 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-config\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783472 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783480 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kczbf\" (UniqueName: \"kubernetes.io/projected/3966764e-5967-46c5-9b39-88fd738b8b0a-kube-api-access-kczbf\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.783488 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3966764e-5967-46c5-9b39-88fd738b8b0a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.794407 4642 scope.go:117] "RemoveContainer" containerID="59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"
Jan 28 07:07:45 crc kubenswrapper[4642]: E0128 07:07:45.794691 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998\": container with ID starting with 59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998 not found: ID does not exist" containerID="59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.794722 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998"} err="failed to get container status \"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998\": rpc error: code = NotFound desc = could not find container \"59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998\": container with ID starting with 59cc0a4217eb939f4a8cd6e6b5e46b169f231555287b65d48fdc408fa5414998 not found: ID does not exist"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.794743 4642 scope.go:117] "RemoveContainer" containerID="74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688"
Jan 28 07:07:45 crc kubenswrapper[4642]: E0128 07:07:45.795016 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688\": container with ID starting with 74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688 not found: ID does not exist" containerID="74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.795066 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688"} err="failed to get container status \"74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688\": rpc error: code = NotFound desc = could not find container \"74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688\": container with ID starting with 74149a55718780f5f55078a148ba35fd2e6705ba2127a425c42a330759c89688 not found: ID does not exist"
Jan 28 07:07:45 crc kubenswrapper[4642]: I0128 07:07:45.978367 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-774c5cf667-7hfrh"]
Jan 28 07:07:46 crc kubenswrapper[4642]: I0128 07:07:46.751857 4642 generic.go:334] "Generic (PLEG): container finished" podID="592db514-d1a6-421d-87f4-60ab08a05885" containerID="bcdf87f6f6af6c044c59ba78bce9be3b0dc22d77cef50323fa101c9ebafe6f03" exitCode=0
Jan 28 07:07:46 crc kubenswrapper[4642]: I0128 07:07:46.751945 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" event={"ID":"592db514-d1a6-421d-87f4-60ab08a05885","Type":"ContainerDied","Data":"bcdf87f6f6af6c044c59ba78bce9be3b0dc22d77cef50323fa101c9ebafe6f03"}
Jan 28 07:07:46 crc kubenswrapper[4642]: I0128 07:07:46.752124 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" event={"ID":"592db514-d1a6-421d-87f4-60ab08a05885","Type":"ContainerStarted","Data":"3c10119de45c454b0627e6b5e3e55c24b14166c78ab41e0ea17700400ee8ce8b"}
Jan 28 07:07:47 crc kubenswrapper[4642]: I0128 07:07:47.107142 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" path="/var/lib/kubelet/pods/3966764e-5967-46c5-9b39-88fd738b8b0a/volumes"
Jan 28 07:07:47 crc kubenswrapper[4642]: I0128 07:07:47.760746 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" event={"ID":"592db514-d1a6-421d-87f4-60ab08a05885","Type":"ContainerStarted","Data":"54f04a5a96375cfe611f74acfe1267bc884432377d4fd69e754c40c0cad83db9"}
Jan 28 07:07:47 crc kubenswrapper[4642]: I0128 07:07:47.760891 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh"
Jan 28 07:07:47 crc kubenswrapper[4642]: I0128 07:07:47.778822 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh" podStartSLOduration=2.77880739 podStartE2EDuration="2.77880739s" podCreationTimestamp="2026-01-28 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:07:47.773566302 +0000 UTC m=+1191.005655111" watchObservedRunningTime="2026-01-28 07:07:47.77880739 +0000 UTC m=+1191.010896200"
Jan 28 07:07:55 crc kubenswrapper[4642]: I0128 07:07:55.603318 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-774c5cf667-7hfrh"
Jan 28 07:07:55 crc kubenswrapper[4642]: I0128 07:07:55.672005 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"]
Jan 28 07:07:55 crc kubenswrapper[4642]: I0128 07:07:55.672283 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="dnsmasq-dns" containerID="cri-o://47760792c76942cbc48a6ef7e16a905ec3066a045b68df2f51a339e8fc0b2f3f" gracePeriod=10
Jan 28 07:07:55 crc kubenswrapper[4642]: I0128 07:07:55.825205 4642 generic.go:334] "Generic (PLEG): container finished" podID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerID="47760792c76942cbc48a6ef7e16a905ec3066a045b68df2f51a339e8fc0b2f3f" exitCode=0
Jan 28 07:07:55 crc kubenswrapper[4642]: I0128 07:07:55.825253 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" event={"ID":"56be9971-eb5a-4b59-8f28-ef459c6269ec","Type":"ContainerDied","Data":"47760792c76942cbc48a6ef7e16a905ec3066a045b68df2f51a339e8fc0b2f3f"}
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.077359 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899"
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151466 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151557 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151624 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhmq\" (UniqueName: \"kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151645 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151685 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151806 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.151929 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config\") pod \"56be9971-eb5a-4b59-8f28-ef459c6269ec\" (UID: \"56be9971-eb5a-4b59-8f28-ef459c6269ec\") "
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.161177 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq" (OuterVolumeSpecName: "kube-api-access-prhmq") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "kube-api-access-prhmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.188776 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.189331 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.192286 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config" (OuterVolumeSpecName: "config") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.192499 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.192517 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.202620 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56be9971-eb5a-4b59-8f28-ef459c6269ec" (UID: "56be9971-eb5a-4b59-8f28-ef459c6269ec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254357 4642 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-config\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254393 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254404 4642 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254413 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prhmq\" (UniqueName: \"kubernetes.io/projected/56be9971-eb5a-4b59-8f28-ef459c6269ec-kube-api-access-prhmq\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254421 4642 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254428 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.254436 4642 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56be9971-eb5a-4b59-8f28-ef459c6269ec-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.833860 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899" event={"ID":"56be9971-eb5a-4b59-8f28-ef459c6269ec","Type":"ContainerDied","Data":"5ead8574c93ca7d2ea496444f06ea4dea3ab07679913fe92381f9a0e13be12e1"}
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.833920 4642 scope.go:117] "RemoveContainer" containerID="47760792c76942cbc48a6ef7e16a905ec3066a045b68df2f51a339e8fc0b2f3f"
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.833921 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d764c6d8c-5m899"
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.851705 4642 scope.go:117] "RemoveContainer" containerID="43e88cfb1e4e5fa3926dfdb5a67ffc8cdf798be0b2b635fa5c6457b767a7c96e"
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.863330 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"]
Jan 28 07:07:56 crc kubenswrapper[4642]: I0128 07:07:56.869358 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d764c6d8c-5m899"]
Jan 28 07:07:57 crc kubenswrapper[4642]: I0128 07:07:57.107911 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" path="/var/lib/kubelet/pods/56be9971-eb5a-4b59-8f28-ef459c6269ec/volumes"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.908604 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"]
Jan 28 07:07:58 crc kubenswrapper[4642]: E0128 07:07:58.909152 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="init"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909164 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="init"
Jan 28 07:07:58 crc kubenswrapper[4642]: E0128 07:07:58.909178 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909199 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: E0128 07:07:58.909221 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="init"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909227 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="init"
Jan 28 07:07:58 crc kubenswrapper[4642]: E0128 07:07:58.909237 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909242 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909451 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="56be9971-eb5a-4b59-8f28-ef459c6269ec" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.909463 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="3966764e-5967-46c5-9b39-88fd738b8b0a" containerName="dnsmasq-dns"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.910018 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.911679 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.912047 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.912250 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.912410 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.917693 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"]
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.999415 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.999460 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.999606 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:58 crc kubenswrapper[4642]: I0128 07:07:58.999700 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffcl\" (UniqueName: \"kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.101101 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.101174 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffcl\" (UniqueName: \"kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.101450 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.101487 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.107831 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.107865 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.110403 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.116594 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffcl\" (UniqueName: \"kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.229405 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.674510 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl"]
Jan 28 07:07:59 crc kubenswrapper[4642]: W0128 07:07:59.676015 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8485ba5_af89_41de_82ac_61f80fdf4831.slice/crio-25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2 WatchSource:0}: Error finding container 25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2: Status 404 returned error can't find the container with id 25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.678142 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 07:07:59 crc kubenswrapper[4642]: I0128 07:07:59.858338 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" event={"ID":"f8485ba5-af89-41de-82ac-61f80fdf4831","Type":"ContainerStarted","Data":"25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2"}
Jan 28 07:08:06 crc kubenswrapper[4642]: I0128 07:08:06.909984 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" event={"ID":"f8485ba5-af89-41de-82ac-61f80fdf4831","Type":"ContainerStarted","Data":"075c369b959ca2a29c99ed5898c07a10e56d53511276cd89f52ec189c2efc264"}
Jan 28 07:08:06 crc kubenswrapper[4642]: I0128 07:08:06.928848 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" podStartSLOduration=2.056978248 podStartE2EDuration="8.928831068s" podCreationTimestamp="2026-01-28 07:07:58 +0000 UTC" firstStartedPulling="2026-01-28 07:07:59.677870738 +0000 UTC m=+1202.909959547" lastFinishedPulling="2026-01-28 07:08:06.549723559 +0000 UTC m=+1209.781812367" observedRunningTime="2026-01-28 07:08:06.926862829 +0000 UTC m=+1210.158951638" watchObservedRunningTime="2026-01-28 07:08:06.928831068 +0000 UTC m=+1210.160919877"
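In the startup-latency entry above, podStartSLOduration is podStartE2EDuration minus the image-pull window: 8.928831068s end to end, pulling from 07:07:59.677870738 to 07:08:06.549723559 (about 6.871852821s), leaves the logged 2.056978248s to within a nanosecond of rounding. For pods like dnsmasq-dns earlier in this log, whose pull timestamps are the zero time 0001-01-01, no pull is tracked and the two durations coincide. A small check of the arithmetic:

    // Sketch: recompute podStartSLOduration for the repo-setup pod from the
    // timestamps logged above; it should be the end-to-end duration minus
    // the firstStartedPulling..lastFinishedPulling window.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        first, _ := time.Parse(layout, "2026-01-28 07:07:59.677870738 +0000 UTC")
        last, _ := time.Parse(layout, "2026-01-28 07:08:06.549723559 +0000 UTC")
        e2e := time.Duration(8.928831068 * float64(time.Second))
        slo := e2e - last.Sub(first)
        fmt.Printf("%.9fs\n", slo.Seconds()) // ~2.056978247s, matching the log
    }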
podStartE2EDuration="34.950991675s" podCreationTimestamp="2026-01-28 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:08:08.947296701 +0000 UTC m=+1212.179385510" watchObservedRunningTime="2026-01-28 07:08:08.950991675 +0000 UTC m=+1212.183080484" Jan 28 07:08:09 crc kubenswrapper[4642]: I0128 07:08:09.936475 4642 generic.go:334] "Generic (PLEG): container finished" podID="830d2eb5-3d8a-4b74-833e-758894985129" containerID="d4d710cb2b90d7c911233a5af5bdbd9c2de7328bf8f14ff71e247ccdd5b3d7f7" exitCode=0 Jan 28 07:08:09 crc kubenswrapper[4642]: I0128 07:08:09.936549 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"830d2eb5-3d8a-4b74-833e-758894985129","Type":"ContainerDied","Data":"d4d710cb2b90d7c911233a5af5bdbd9c2de7328bf8f14ff71e247ccdd5b3d7f7"} Jan 28 07:08:11 crc kubenswrapper[4642]: I0128 07:08:11.955706 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"830d2eb5-3d8a-4b74-833e-758894985129","Type":"ContainerStarted","Data":"3187c5018b1375e290c178291d446cc6ad627710ac784ccbebcddfcd51e007f4"} Jan 28 07:08:11 crc kubenswrapper[4642]: I0128 07:08:11.956245 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:08:11 crc kubenswrapper[4642]: I0128 07:08:11.976712 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.976691479 podStartE2EDuration="36.976691479s" podCreationTimestamp="2026-01-28 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:08:11.971845062 +0000 UTC m=+1215.203933871" watchObservedRunningTime="2026-01-28 07:08:11.976691479 +0000 UTC m=+1215.208780287" Jan 28 07:08:17 crc kubenswrapper[4642]: I0128 07:08:17.999313 4642 generic.go:334] "Generic (PLEG): container finished" podID="f8485ba5-af89-41de-82ac-61f80fdf4831" containerID="075c369b959ca2a29c99ed5898c07a10e56d53511276cd89f52ec189c2efc264" exitCode=0 Jan 28 07:08:17 crc kubenswrapper[4642]: I0128 07:08:17.999414 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" event={"ID":"f8485ba5-af89-41de-82ac-61f80fdf4831","Type":"ContainerDied","Data":"075c369b959ca2a29c99ed5898c07a10e56d53511276cd89f52ec189c2efc264"} Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.334243 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.460929 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam\") pod \"f8485ba5-af89-41de-82ac-61f80fdf4831\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.461020 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nffcl\" (UniqueName: \"kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl\") pod \"f8485ba5-af89-41de-82ac-61f80fdf4831\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.461098 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory\") pod \"f8485ba5-af89-41de-82ac-61f80fdf4831\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.461278 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle\") pod \"f8485ba5-af89-41de-82ac-61f80fdf4831\" (UID: \"f8485ba5-af89-41de-82ac-61f80fdf4831\") " Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.465386 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f8485ba5-af89-41de-82ac-61f80fdf4831" (UID: "f8485ba5-af89-41de-82ac-61f80fdf4831"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.465391 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl" (OuterVolumeSpecName: "kube-api-access-nffcl") pod "f8485ba5-af89-41de-82ac-61f80fdf4831" (UID: "f8485ba5-af89-41de-82ac-61f80fdf4831"). InnerVolumeSpecName "kube-api-access-nffcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.482125 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8485ba5-af89-41de-82ac-61f80fdf4831" (UID: "f8485ba5-af89-41de-82ac-61f80fdf4831"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.482292 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory" (OuterVolumeSpecName: "inventory") pod "f8485ba5-af89-41de-82ac-61f80fdf4831" (UID: "f8485ba5-af89-41de-82ac-61f80fdf4831"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.564002 4642 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.564029 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.564042 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nffcl\" (UniqueName: \"kubernetes.io/projected/f8485ba5-af89-41de-82ac-61f80fdf4831-kube-api-access-nffcl\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:19 crc kubenswrapper[4642]: I0128 07:08:19.564051 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8485ba5-af89-41de-82ac-61f80fdf4831-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.014950 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" event={"ID":"f8485ba5-af89-41de-82ac-61f80fdf4831","Type":"ContainerDied","Data":"25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2"} Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.014988 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25bb63bdbe36a7164c4ba121c8404c4fdda1af0f427044b5ac519b6dbe655cc2" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.015242 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.068436 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4"] Jan 28 07:08:20 crc kubenswrapper[4642]: E0128 07:08:20.068891 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8485ba5-af89-41de-82ac-61f80fdf4831" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.068910 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8485ba5-af89-41de-82ac-61f80fdf4831" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.069084 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8485ba5-af89-41de-82ac-61f80fdf4831" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.069719 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.071542 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.071828 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.072490 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.072770 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.077222 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4"] Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.172865 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdpw\" (UniqueName: \"kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.172924 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.173001 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.274678 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdpw\" (UniqueName: \"kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.274725 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.274758 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.277497 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.277505 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.288378 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdpw\" (UniqueName: \"kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4xwd4\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.381082 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:20 crc kubenswrapper[4642]: I0128 07:08:20.813953 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4"] Jan 28 07:08:21 crc kubenswrapper[4642]: I0128 07:08:21.022819 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" event={"ID":"0571eda4-a4be-4e57-93f6-b31928d2bdd3","Type":"ContainerStarted","Data":"c4006e06c10188c3c04a4c02161fc651363ba449a0f6e228744ffd61030982db"} Jan 28 07:08:22 crc kubenswrapper[4642]: I0128 07:08:22.032041 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" event={"ID":"0571eda4-a4be-4e57-93f6-b31928d2bdd3","Type":"ContainerStarted","Data":"bfbca7ae0d4a3ec8a5b81e3582d4039063e6a29a68499c702e6ca274832586f1"} Jan 28 07:08:22 crc kubenswrapper[4642]: I0128 07:08:22.047972 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" podStartSLOduration=1.572847157 podStartE2EDuration="2.047953098s" podCreationTimestamp="2026-01-28 07:08:20 +0000 UTC" firstStartedPulling="2026-01-28 07:08:20.818454801 +0000 UTC m=+1224.050543610" lastFinishedPulling="2026-01-28 07:08:21.293560742 +0000 UTC m=+1224.525649551" observedRunningTime="2026-01-28 07:08:22.042751284 +0000 UTC m=+1225.274840093" watchObservedRunningTime="2026-01-28 07:08:22.047953098 +0000 UTC m=+1225.280041907" Jan 28 07:08:24 crc kubenswrapper[4642]: I0128 07:08:24.047158 4642 generic.go:334] "Generic (PLEG): container finished" podID="0571eda4-a4be-4e57-93f6-b31928d2bdd3" containerID="bfbca7ae0d4a3ec8a5b81e3582d4039063e6a29a68499c702e6ca274832586f1" exitCode=0 Jan 28 07:08:24 crc kubenswrapper[4642]: I0128 07:08:24.047249 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" event={"ID":"0571eda4-a4be-4e57-93f6-b31928d2bdd3","Type":"ContainerDied","Data":"bfbca7ae0d4a3ec8a5b81e3582d4039063e6a29a68499c702e6ca274832586f1"} Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.003347 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.395978 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.560801 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam\") pod \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.561158 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdpw\" (UniqueName: \"kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw\") pod \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.561197 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory\") pod \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\" (UID: \"0571eda4-a4be-4e57-93f6-b31928d2bdd3\") " Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.571890 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw" (OuterVolumeSpecName: "kube-api-access-rhdpw") pod "0571eda4-a4be-4e57-93f6-b31928d2bdd3" (UID: "0571eda4-a4be-4e57-93f6-b31928d2bdd3"). InnerVolumeSpecName "kube-api-access-rhdpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.584287 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory" (OuterVolumeSpecName: "inventory") pod "0571eda4-a4be-4e57-93f6-b31928d2bdd3" (UID: "0571eda4-a4be-4e57-93f6-b31928d2bdd3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.586784 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0571eda4-a4be-4e57-93f6-b31928d2bdd3" (UID: "0571eda4-a4be-4e57-93f6-b31928d2bdd3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.662969 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.662997 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhdpw\" (UniqueName: \"kubernetes.io/projected/0571eda4-a4be-4e57-93f6-b31928d2bdd3-kube-api-access-rhdpw\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:25 crc kubenswrapper[4642]: I0128 07:08:25.663007 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0571eda4-a4be-4e57-93f6-b31928d2bdd3-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.079417 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" event={"ID":"0571eda4-a4be-4e57-93f6-b31928d2bdd3","Type":"ContainerDied","Data":"c4006e06c10188c3c04a4c02161fc651363ba449a0f6e228744ffd61030982db"} Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.079486 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4006e06c10188c3c04a4c02161fc651363ba449a0f6e228744ffd61030982db" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.079521 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4xwd4" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.135665 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"] Jan 28 07:08:26 crc kubenswrapper[4642]: E0128 07:08:26.136071 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0571eda4-a4be-4e57-93f6-b31928d2bdd3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.136089 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0571eda4-a4be-4e57-93f6-b31928d2bdd3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.136316 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0571eda4-a4be-4e57-93f6-b31928d2bdd3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.136936 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.139259 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.139513 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.139633 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.142268 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.145016 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"] Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.277779 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.277870 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgss\" (UniqueName: \"kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.277950 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.278001 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.337094 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.379452 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.379518 4642 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgss\" (UniqueName: \"kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.379578 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.379614 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.384574 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.384637 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.384926 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.399727 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgss\" (UniqueName: \"kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2624h\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.452784 4642 util.go:30] "No sandbox for pod can be found. 
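
The sequence just above is the kubelet volume manager reconciling desired state against actual state: VerifyControllerAttachedVolume confirms the volume is attached, "MountVolume started" records the operation in flight, and "MountVolume.SetUp succeeded" lands it in the actual state of the world. A minimal sketch of that reconcile pattern, assuming toy in-memory state rather than the kubelet's real desired/actual-state caches:

    package main

    import (
        "fmt"
        "time"
    )

    // worldState is a hypothetical stand-in for the kubelet's desired/actual
    // state-of-the-world caches, keyed by unique volume name.
    type worldState map[string]bool

    // reconcile mounts anything desired but not yet actual, and unmounts anything
    // actual that is no longer desired, mirroring the "MountVolume started" /
    // "UnmountVolume started" pairs in the log.
    func reconcile(desired, actual worldState) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
                actual[vol] = true // stands in for MountVolume.SetUp
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
                delete(actual, vol)
            }
        }
    }

    func main() {
        desired := worldState{"inventory": true, "ssh-key-openstack-edpm-ipam": true}
        actual := worldState{}
        for i := 0; i < 2; i++ { // the real reconciler re-runs on a timer
            reconcile(desired, actual)
            time.Sleep(100 * time.Millisecond)
        }
    }
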
Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.452784 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"
Jan 28 07:08:26 crc kubenswrapper[4642]: W0128 07:08:26.907352 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71799297_5b25_4d16_97d8_5cd3b6e9c52e.slice/crio-27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f WatchSource:0}: Error finding container 27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f: Status 404 returned error can't find the container with id 27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f
Jan 28 07:08:26 crc kubenswrapper[4642]: I0128 07:08:26.907367 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"]
Jan 28 07:08:27 crc kubenswrapper[4642]: I0128 07:08:27.092431 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" event={"ID":"71799297-5b25-4d16-97d8-5cd3b6e9c52e","Type":"ContainerStarted","Data":"27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f"}
Jan 28 07:08:28 crc kubenswrapper[4642]: I0128 07:08:28.101109 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" event={"ID":"71799297-5b25-4d16-97d8-5cd3b6e9c52e","Type":"ContainerStarted","Data":"a45b6f3518511150e0eb90123541549df939bcd1d8e9096de777afe2e97b949a"}
Jan 28 07:08:28 crc kubenswrapper[4642]: I0128 07:08:28.119015 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" podStartSLOduration=1.657428994 podStartE2EDuration="2.119000026s" podCreationTimestamp="2026-01-28 07:08:26 +0000 UTC" firstStartedPulling="2026-01-28 07:08:26.910033771 +0000 UTC m=+1230.142122570" lastFinishedPulling="2026-01-28 07:08:27.371604793 +0000 UTC m=+1230.603693602" observedRunningTime="2026-01-28 07:08:28.113702201 +0000 UTC m=+1231.345791010" watchObservedRunningTime="2026-01-28 07:08:28.119000026 +0000 UTC m=+1231.351088835"
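
The pod_startup_latency_tracker entry above is plain arithmetic over the timestamps it prints: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the figures, with values copied from the log (small rounding differences against the tracker's output are expected):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-28 07:08:26 +0000 UTC")
        firstPull, _ := time.Parse(layout, "2026-01-28 07:08:26.910033771 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2026-01-28 07:08:27.371604793 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-28 07:08:28.119000026 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes pull time
        fmt.Println(e2e, slo)                // ~2.119000026s and ~1.657429s
    }
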
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.199414 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.200165 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.200236 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.201214 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.201291 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09" gracePeriod=600 Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.589509 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09" exitCode=0 Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.589577 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09"} Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.589846 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"} Jan 28 07:09:38 crc kubenswrapper[4642]: I0128 07:09:38.589874 4642 scope.go:117] "RemoveContainer" containerID="141f548b8402c2028aeffc2bc9021f71ad46dc4f636bc6f8740c8315416f2bd3" Jan 28 07:10:02 crc kubenswrapper[4642]: I0128 07:10:02.033385 4642 scope.go:117] "RemoveContainer" containerID="4e2eb66f9a1e2cc53963e15f342b68a48faa3a94507bd022f623cf3e62a0a5ab" Jan 28 07:10:02 crc kubenswrapper[4642]: I0128 07:10:02.056700 4642 scope.go:117] "RemoveContainer" containerID="ca388336b7a530c569f20022ecdf0d4bd6ae9d866ffeaf2de9f3ec0865c0b1ba" Jan 28 07:10:02 crc kubenswrapper[4642]: I0128 07:10:02.105326 4642 scope.go:117] "RemoveContainer" containerID="393b5f0552af00e7059e84fdd61eb1629f86e0494dc5a0e0754cc1b9c95d9942" Jan 28 07:11:02 crc kubenswrapper[4642]: 
Jan 28 07:11:02 crc kubenswrapper[4642]: I0128 07:11:02.193762 4642 scope.go:117] "RemoveContainer" containerID="5f6fc8c6cfdde9eaa51ee3340fc2fb4f8003c31526b7645e86f40f84c39c5d65"
Jan 28 07:11:02 crc kubenswrapper[4642]: I0128 07:11:02.211756 4642 scope.go:117] "RemoveContainer" containerID="0090957fad569ddbb7c7ff55a8fe348adcd9ee144812fbe6b714c0e5a35d0b81"
Jan 28 07:11:16 crc kubenswrapper[4642]: I0128 07:11:16.914000 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"]
Jan 28 07:11:16 crc kubenswrapper[4642]: I0128 07:11:16.916632 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:16 crc kubenswrapper[4642]: I0128 07:11:16.920133 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"]
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.038846 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.038912 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5r9h\" (UniqueName: \"kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.039207 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.141282 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5r9h\" (UniqueName: \"kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.141402 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.141519 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.142587 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.142746 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.162995 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5r9h\" (UniqueName: \"kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h\") pod \"redhat-marketplace-sqzjp\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.232324 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:17 crc kubenswrapper[4642]: I0128 07:11:17.625429 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"]
Jan 28 07:11:18 crc kubenswrapper[4642]: I0128 07:11:18.305120 4642 generic.go:334] "Generic (PLEG): container finished" podID="ea8574db-37ac-4416-b888-00a545147dea" containerID="cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40" exitCode=0
Jan 28 07:11:18 crc kubenswrapper[4642]: I0128 07:11:18.305171 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerDied","Data":"cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40"}
Jan 28 07:11:18 crc kubenswrapper[4642]: I0128 07:11:18.305238 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerStarted","Data":"7564da0d61392e20c991a021b6ff5025b750dc1d5caa6f391e32c11bc20a5a81"}
Jan 28 07:11:19 crc kubenswrapper[4642]: I0128 07:11:19.314898 4642 generic.go:334] "Generic (PLEG): container finished" podID="ea8574db-37ac-4416-b888-00a545147dea" containerID="1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326" exitCode=0
Jan 28 07:11:19 crc kubenswrapper[4642]: I0128 07:11:19.315011 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerDied","Data":"1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326"}
Jan 28 07:11:20 crc kubenswrapper[4642]: I0128 07:11:20.328037 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerStarted","Data":"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf"}
Jan 28 07:11:20 crc kubenswrapper[4642]: I0128 07:11:20.347671 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqzjp" podStartSLOduration=2.842776294 podStartE2EDuration="4.34765765s" podCreationTimestamp="2026-01-28 07:11:16 +0000 UTC" firstStartedPulling="2026-01-28 07:11:18.307658286 +0000 UTC m=+1401.539747095" lastFinishedPulling="2026-01-28 07:11:19.812539642 +0000 UTC m=+1403.044628451" observedRunningTime="2026-01-28 07:11:20.342223542 +0000 UTC m=+1403.574312351" watchObservedRunningTime="2026-01-28 07:11:20.34765765 +0000 UTC m=+1403.579746459"
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.232766 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.233251 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.266056 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.374974 4642 generic.go:334] "Generic (PLEG): container finished" podID="71799297-5b25-4d16-97d8-5cd3b6e9c52e" containerID="a45b6f3518511150e0eb90123541549df939bcd1d8e9096de777afe2e97b949a" exitCode=0
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.375063 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" event={"ID":"71799297-5b25-4d16-97d8-5cd3b6e9c52e","Type":"ContainerDied","Data":"a45b6f3518511150e0eb90123541549df939bcd1d8e9096de777afe2e97b949a"}
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.409931 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqzjp"
Jan 28 07:11:27 crc kubenswrapper[4642]: I0128 07:11:27.496507 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"]
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.726870 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.838021 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgss\" (UniqueName: \"kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss\") pod \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") "
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.838089 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle\") pod \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") "
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.838156 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam\") pod \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") "
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.838272 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory\") pod \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\" (UID: \"71799297-5b25-4d16-97d8-5cd3b6e9c52e\") "
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.844308 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "71799297-5b25-4d16-97d8-5cd3b6e9c52e" (UID: "71799297-5b25-4d16-97d8-5cd3b6e9c52e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.844361 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss" (OuterVolumeSpecName: "kube-api-access-rqgss") pod "71799297-5b25-4d16-97d8-5cd3b6e9c52e" (UID: "71799297-5b25-4d16-97d8-5cd3b6e9c52e"). InnerVolumeSpecName "kube-api-access-rqgss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.863034 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71799297-5b25-4d16-97d8-5cd3b6e9c52e" (UID: "71799297-5b25-4d16-97d8-5cd3b6e9c52e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.863394 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory" (OuterVolumeSpecName: "inventory") pod "71799297-5b25-4d16-97d8-5cd3b6e9c52e" (UID: "71799297-5b25-4d16-97d8-5cd3b6e9c52e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.940982 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgss\" (UniqueName: \"kubernetes.io/projected/71799297-5b25-4d16-97d8-5cd3b6e9c52e-kube-api-access-rqgss\") on node \"crc\" DevicePath \"\""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.941237 4642 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.941248 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 07:11:28 crc kubenswrapper[4642]: I0128 07:11:28.941257 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71799297-5b25-4d16-97d8-5cd3b6e9c52e-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.393993 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h" event={"ID":"71799297-5b25-4d16-97d8-5cd3b6e9c52e","Type":"ContainerDied","Data":"27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f"}
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.394053 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2624h"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.394061 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27eac1eb2278870c31c1772322b21cb7e5fcc9df3ca2f185507215768d45c13f"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.394139 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqzjp" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="registry-server" containerID="cri-o://a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf" gracePeriod=2
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.453977 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm"]
Jan 28 07:11:29 crc kubenswrapper[4642]: E0128 07:11:29.454319 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71799297-5b25-4d16-97d8-5cd3b6e9c52e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.454336 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="71799297-5b25-4d16-97d8-5cd3b6e9c52e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.454529 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="71799297-5b25-4d16-97d8-5cd3b6e9c52e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.455086 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.456428 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.456673 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.456888 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.461245 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m"
Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.464861 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm"]
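
The reflector.go "Caches populated" entries show the kubelet warming client-go list/watch caches for the Secrets and ConfigMap the new pod references before volume setup starts. A standalone sketch of the same machinery, using a shared informer on Secrets in the "openstack" namespace (the kubeconfig path is illustrative, and this assumes k8s.io/client-go in go.mod):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative; in-cluster code would use rest.InClusterConfig().
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        stop := make(chan struct{})
        defer close(stop)

        // Watch only the "openstack" namespace, like the object-"openstack"/... entries above.
        factory := informers.NewSharedInformerFactoryWithOptions(
            clientset, 0, informers.WithNamespace("openstack"))
        secrets := factory.Core().V1().Secrets().Informer()
        secrets.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                s := obj.(*corev1.Secret)
                fmt.Println("cache populated for secret:", s.Name)
            },
        })

        factory.Start(stop)
        // The rough equivalent of the "Caches populated" lines: the initial list+watch sync.
        cache.WaitForCacheSync(stop, secrets.HasSynced)
    }
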
\"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.553958 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.656054 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wnn\" (UniqueName: \"kubernetes.io/projected/f6bd9dfb-a07e-4082-ab80-e7de0f582617-kube-api-access-b6wnn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.656134 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.656163 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.661363 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.661377 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.670964 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wnn\" (UniqueName: \"kubernetes.io/projected/f6bd9dfb-a07e-4082-ab80-e7de0f582617-kube-api-access-b6wnn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.749882 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzjp" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.757617 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5r9h\" (UniqueName: \"kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h\") pod \"ea8574db-37ac-4416-b888-00a545147dea\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.757676 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content\") pod \"ea8574db-37ac-4416-b888-00a545147dea\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.757698 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities\") pod \"ea8574db-37ac-4416-b888-00a545147dea\" (UID: \"ea8574db-37ac-4416-b888-00a545147dea\") " Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.758353 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities" (OuterVolumeSpecName: "utilities") pod "ea8574db-37ac-4416-b888-00a545147dea" (UID: "ea8574db-37ac-4416-b888-00a545147dea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.758584 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.762045 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h" (OuterVolumeSpecName: "kube-api-access-c5r9h") pod "ea8574db-37ac-4416-b888-00a545147dea" (UID: "ea8574db-37ac-4416-b888-00a545147dea"). InnerVolumeSpecName "kube-api-access-c5r9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.823616 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea8574db-37ac-4416-b888-00a545147dea" (UID: "ea8574db-37ac-4416-b888-00a545147dea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.850716 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.859094 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5r9h\" (UniqueName: \"kubernetes.io/projected/ea8574db-37ac-4416-b888-00a545147dea-kube-api-access-c5r9h\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.859130 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea8574db-37ac-4416-b888-00a545147dea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.905804 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"] Jan 28 07:11:29 crc kubenswrapper[4642]: E0128 07:11:29.906225 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="extract-content" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.906238 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="extract-content" Jan 28 07:11:29 crc kubenswrapper[4642]: E0128 07:11:29.906259 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="registry-server" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.906264 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="registry-server" Jan 28 07:11:29 crc kubenswrapper[4642]: E0128 07:11:29.906288 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="extract-utilities" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.906295 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="extract-utilities" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.906464 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8574db-37ac-4416-b888-00a545147dea" containerName="registry-server" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.909375 4642 util.go:30] "No sandbox for pod can be found. 
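
The cpu_manager/memory_manager RemoveStaleState entries fire during admission of a new pod: the managers sweep checkpointed per-container resource assignments whose pods no longer exist, hence one "RemoveStaleState: removing container" / "Deleted CPUSet assignment" pair per stale container of the just-deleted pod. A minimal sketch of that sweep, with a hypothetical in-memory store standing in for the kubelet's checkpointed state:

    package main

    import "fmt"

    // key identifies a container's resource assignment, matching the log's
    // podUID/containerName pairs.
    type key struct{ podUID, container string }

    // removeStaleState deletes assignments for pods that are no longer active,
    // mirroring "RemoveStaleState: removing container" / "Deleted CPUSet assignment".
    func removeStaleState(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"ea8574db-37ac-4416-b888-00a545147dea", "registry-server"}: "cpuset 0-1",
        }
        active := map[string]bool{} // the old pod has been deleted from the API
        removeStaleState(assignments, active)
    }
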
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.918861 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"] Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.961374 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.961706 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngc4h\" (UniqueName: \"kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:29 crc kubenswrapper[4642]: I0128 07:11:29.961729 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.063705 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.063850 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngc4h\" (UniqueName: \"kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.063876 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.064243 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.064284 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.081048 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ngc4h\" (UniqueName: \"kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h\") pod \"redhat-operators-wj7gx\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.260710 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.350407 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm"] Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.406331 4642 generic.go:334] "Generic (PLEG): container finished" podID="ea8574db-37ac-4416-b888-00a545147dea" containerID="a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf" exitCode=0 Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.406410 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerDied","Data":"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf"} Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.406463 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzjp" event={"ID":"ea8574db-37ac-4416-b888-00a545147dea","Type":"ContainerDied","Data":"7564da0d61392e20c991a021b6ff5025b750dc1d5caa6f391e32c11bc20a5a81"} Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.406480 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzjp" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.406492 4642 scope.go:117] "RemoveContainer" containerID="a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.410256 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" event={"ID":"f6bd9dfb-a07e-4082-ab80-e7de0f582617","Type":"ContainerStarted","Data":"4ff6867c6317f55ae3d5a4e6539a11a69fdf292c42d5d73ad8a897df8a0e5658"} Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.425657 4642 scope.go:117] "RemoveContainer" containerID="1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.440607 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"] Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.448624 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzjp"] Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.469482 4642 scope.go:117] "RemoveContainer" containerID="cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.487263 4642 scope.go:117] "RemoveContainer" containerID="a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf" Jan 28 07:11:30 crc kubenswrapper[4642]: E0128 07:11:30.487634 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf\": container with ID starting with a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf not found: ID does not exist" 
containerID="a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.487676 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf"} err="failed to get container status \"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf\": rpc error: code = NotFound desc = could not find container \"a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf\": container with ID starting with a954a37002c1d3d493bf8b2a327135c70effa1d49db712b5bafcf48daeaf2cbf not found: ID does not exist" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.487701 4642 scope.go:117] "RemoveContainer" containerID="1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326" Jan 28 07:11:30 crc kubenswrapper[4642]: E0128 07:11:30.488036 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326\": container with ID starting with 1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326 not found: ID does not exist" containerID="1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.488075 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326"} err="failed to get container status \"1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326\": rpc error: code = NotFound desc = could not find container \"1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326\": container with ID starting with 1c6f9e6a21ea913a15cd8d5bf1568ababa29c9c4d468e46e681c9d0958c5d326 not found: ID does not exist" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.488089 4642 scope.go:117] "RemoveContainer" containerID="cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40" Jan 28 07:11:30 crc kubenswrapper[4642]: E0128 07:11:30.488382 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40\": container with ID starting with cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40 not found: ID does not exist" containerID="cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.488408 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40"} err="failed to get container status \"cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40\": rpc error: code = NotFound desc = could not find container \"cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40\": container with ID starting with cf15fe132b1b8bd4d18c7b3722b3db0896ff3d262ff02c1235956f4ea26e0f40 not found: ID does not exist" Jan 28 07:11:30 crc kubenswrapper[4642]: I0128 07:11:30.653607 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"] Jan 28 07:11:30 crc kubenswrapper[4642]: W0128 07:11:30.660383 4642 manager.go:1169] Failed to process watch event {EventType:0 
Jan 28 07:11:30 crc kubenswrapper[4642]: W0128 07:11:30.660383 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a1523f7_46d1_407a_bc84_9f7847f94e77.slice/crio-28487998363bdfea2bff8acf2407fb072eb0a85a0c11b6dc7dca287e8aa256e6 WatchSource:0}: Error finding container 28487998363bdfea2bff8acf2407fb072eb0a85a0c11b6dc7dca287e8aa256e6: Status 404 returned error can't find the container with id 28487998363bdfea2bff8acf2407fb072eb0a85a0c11b6dc7dca287e8aa256e6
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.108153 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8574db-37ac-4416-b888-00a545147dea" path="/var/lib/kubelet/pods/ea8574db-37ac-4416-b888-00a545147dea/volumes"
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.418041 4642 generic.go:334] "Generic (PLEG): container finished" podID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerID="ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb" exitCode=0
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.418108 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerDied","Data":"ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb"}
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.418137 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerStarted","Data":"28487998363bdfea2bff8acf2407fb072eb0a85a0c11b6dc7dca287e8aa256e6"}
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.421391 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" event={"ID":"f6bd9dfb-a07e-4082-ab80-e7de0f582617","Type":"ContainerStarted","Data":"283a4815dd4e2c0d7415f68a0b2720aeddb2ef6edfeb6ddd0fb1c6096554a94f"}
Jan 28 07:11:31 crc kubenswrapper[4642]: I0128 07:11:31.459068 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" podStartSLOduration=1.890724369 podStartE2EDuration="2.459051789s" podCreationTimestamp="2026-01-28 07:11:29 +0000 UTC" firstStartedPulling="2026-01-28 07:11:30.363580236 +0000 UTC m=+1413.595669045" lastFinishedPulling="2026-01-28 07:11:30.931907656 +0000 UTC m=+1414.163996465" observedRunningTime="2026-01-28 07:11:31.452817316 +0000 UTC m=+1414.684906124" watchObservedRunningTime="2026-01-28 07:11:31.459051789 +0000 UTC m=+1414.691140599"
Jan 28 07:11:32 crc kubenswrapper[4642]: I0128 07:11:32.431647 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerStarted","Data":"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12"}
Jan 28 07:11:34 crc kubenswrapper[4642]: I0128 07:11:34.446795 4642 generic.go:334] "Generic (PLEG): container finished" podID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerID="458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12" exitCode=0
Jan 28 07:11:34 crc kubenswrapper[4642]: I0128 07:11:34.446829 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerDied","Data":"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12"}
Jan 28 07:11:35 crc kubenswrapper[4642]: I0128 07:11:35.456339 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerStarted","Data":"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469"}
Jan 28 07:11:35 crc kubenswrapper[4642]: I0128 07:11:35.473992 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wj7gx" podStartSLOduration=2.907332027 podStartE2EDuration="6.47395857s" podCreationTimestamp="2026-01-28 07:11:29 +0000 UTC" firstStartedPulling="2026-01-28 07:11:31.419677745 +0000 UTC m=+1414.651766555" lastFinishedPulling="2026-01-28 07:11:34.986304289 +0000 UTC m=+1418.218393098" observedRunningTime="2026-01-28 07:11:35.469455102 +0000 UTC m=+1418.701543911" watchObservedRunningTime="2026-01-28 07:11:35.47395857 +0000 UTC m=+1418.706047380"
Jan 28 07:11:38 crc kubenswrapper[4642]: I0128 07:11:38.199457 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 07:11:38 crc kubenswrapper[4642]: I0128 07:11:38.199714 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 07:11:40 crc kubenswrapper[4642]: I0128 07:11:40.261555 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wj7gx"
Jan 28 07:11:40 crc kubenswrapper[4642]: I0128 07:11:40.261909 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wj7gx"
Jan 28 07:11:40 crc kubenswrapper[4642]: I0128 07:11:40.294271 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wj7gx"
Jan 28 07:11:40 crc kubenswrapper[4642]: I0128 07:11:40.526557 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wj7gx"
Jan 28 07:11:40 crc kubenswrapper[4642]: I0128 07:11:40.566153 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"]
Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.505547 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wj7gx" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="registry-server" containerID="cri-o://d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.884524 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities\") pod \"7a1523f7-46d1-407a-bc84-9f7847f94e77\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.884624 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngc4h\" (UniqueName: \"kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h\") pod \"7a1523f7-46d1-407a-bc84-9f7847f94e77\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.885485 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities" (OuterVolumeSpecName: "utilities") pod "7a1523f7-46d1-407a-bc84-9f7847f94e77" (UID: "7a1523f7-46d1-407a-bc84-9f7847f94e77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.886148 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content\") pod \"7a1523f7-46d1-407a-bc84-9f7847f94e77\" (UID: \"7a1523f7-46d1-407a-bc84-9f7847f94e77\") " Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.888096 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.893296 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h" (OuterVolumeSpecName: "kube-api-access-ngc4h") pod "7a1523f7-46d1-407a-bc84-9f7847f94e77" (UID: "7a1523f7-46d1-407a-bc84-9f7847f94e77"). InnerVolumeSpecName "kube-api-access-ngc4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.976553 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a1523f7-46d1-407a-bc84-9f7847f94e77" (UID: "7a1523f7-46d1-407a-bc84-9f7847f94e77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.990055 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngc4h\" (UniqueName: \"kubernetes.io/projected/7a1523f7-46d1-407a-bc84-9f7847f94e77-kube-api-access-ngc4h\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:42 crc kubenswrapper[4642]: I0128 07:11:42.990088 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a1523f7-46d1-407a-bc84-9f7847f94e77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.515273 4642 generic.go:334] "Generic (PLEG): container finished" podID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerID="d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469" exitCode=0 Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.515331 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj7gx" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.515332 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerDied","Data":"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469"} Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.515501 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj7gx" event={"ID":"7a1523f7-46d1-407a-bc84-9f7847f94e77","Type":"ContainerDied","Data":"28487998363bdfea2bff8acf2407fb072eb0a85a0c11b6dc7dca287e8aa256e6"} Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.515536 4642 scope.go:117] "RemoveContainer" containerID="d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.532940 4642 scope.go:117] "RemoveContainer" containerID="458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.536675 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"] Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.544087 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wj7gx"] Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.551887 4642 scope.go:117] "RemoveContainer" containerID="ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.581967 4642 scope.go:117] "RemoveContainer" containerID="d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469" Jan 28 07:11:43 crc kubenswrapper[4642]: E0128 07:11:43.582394 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469\": container with ID starting with d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469 not found: ID does not exist" containerID="d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.582464 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469"} err="failed to get container status \"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469\": 
rpc error: code = NotFound desc = could not find container \"d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469\": container with ID starting with d4e051fcb390b1c36984c75ebc8b6556d1ba81b80544ec15bde3a67764d9e469 not found: ID does not exist" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.582496 4642 scope.go:117] "RemoveContainer" containerID="458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12" Jan 28 07:11:43 crc kubenswrapper[4642]: E0128 07:11:43.582832 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12\": container with ID starting with 458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12 not found: ID does not exist" containerID="458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.582864 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12"} err="failed to get container status \"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12\": rpc error: code = NotFound desc = could not find container \"458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12\": container with ID starting with 458f80b605a7be6754121ba40edd03f1f5a86260fd394948e5234dd601b1dd12 not found: ID does not exist" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.582888 4642 scope.go:117] "RemoveContainer" containerID="ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb" Jan 28 07:11:43 crc kubenswrapper[4642]: E0128 07:11:43.583128 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb\": container with ID starting with ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb not found: ID does not exist" containerID="ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb" Jan 28 07:11:43 crc kubenswrapper[4642]: I0128 07:11:43.583150 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb"} err="failed to get container status \"ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb\": rpc error: code = NotFound desc = could not find container \"ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb\": container with ID starting with ccc42481bfd6370b4706b6e95712d79763fdc87c40345904459afe2a86b50ecb not found: ID does not exist" Jan 28 07:11:45 crc kubenswrapper[4642]: I0128 07:11:45.116773 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" path="/var/lib/kubelet/pods/7a1523f7-46d1-407a-bc84-9f7847f94e77/volumes" Jan 28 07:12:02 crc kubenswrapper[4642]: I0128 07:12:02.251570 4642 scope.go:117] "RemoveContainer" containerID="a075b8d5feaefb4d57b1c6a606d793ed73c1897b90c8fee4fb36cd4bc84d3aeb" Jan 28 07:12:08 crc kubenswrapper[4642]: I0128 07:12:08.199675 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:12:08 crc kubenswrapper[4642]: 
I0128 07:12:08.200092 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.199880 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.200261 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.200302 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.201244 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.201331 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" gracePeriod=600 Jan 28 07:12:38 crc kubenswrapper[4642]: E0128 07:12:38.317435 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.926316 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" exitCode=0 Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.926355 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"} Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.926389 4642 scope.go:117] "RemoveContainer" containerID="50c14e99bdc05e634d65f44bc9c435f3fa5f0310a7195243e717dcf863377f09" Jan 28 07:12:38 crc kubenswrapper[4642]: I0128 07:12:38.926954 4642 scope.go:117] "RemoveContainer" 
containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:12:38 crc kubenswrapper[4642]: E0128 07:12:38.927215 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:12:50 crc kubenswrapper[4642]: I0128 07:12:50.098564 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:12:50 crc kubenswrapper[4642]: E0128 07:12:50.099218 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.027571 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:12:53 crc kubenswrapper[4642]: E0128 07:12:53.028122 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="registry-server" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.028136 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="registry-server" Jan 28 07:12:53 crc kubenswrapper[4642]: E0128 07:12:53.028147 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="extract-content" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.028153 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="extract-content" Jan 28 07:12:53 crc kubenswrapper[4642]: E0128 07:12:53.028170 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="extract-utilities" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.028175 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="extract-utilities" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.028379 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1523f7-46d1-407a-bc84-9f7847f94e77" containerName="registry-server" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.029505 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.034442 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.098484 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxvv\" (UniqueName: \"kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.099352 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.099579 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.201304 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.201366 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.201495 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxvv\" (UniqueName: \"kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.201785 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.201798 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.215690 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-msxvv\" (UniqueName: \"kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv\") pod \"community-operators-k754x\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.345773 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:12:53 crc kubenswrapper[4642]: I0128 07:12:53.717180 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:12:54 crc kubenswrapper[4642]: I0128 07:12:54.023833 4642 generic.go:334] "Generic (PLEG): container finished" podID="530ae063-7096-44e9-b623-402fb7c1944a" containerID="c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513" exitCode=0 Jan 28 07:12:54 crc kubenswrapper[4642]: I0128 07:12:54.023923 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerDied","Data":"c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513"} Jan 28 07:12:54 crc kubenswrapper[4642]: I0128 07:12:54.024104 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerStarted","Data":"036637e5a1d547063588492d7513a1ef5e20065496b649b951961f8988ae161b"} Jan 28 07:12:55 crc kubenswrapper[4642]: I0128 07:12:55.032380 4642 generic.go:334] "Generic (PLEG): container finished" podID="530ae063-7096-44e9-b623-402fb7c1944a" containerID="ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84" exitCode=0 Jan 28 07:12:55 crc kubenswrapper[4642]: I0128 07:12:55.032497 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerDied","Data":"ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84"} Jan 28 07:12:56 crc kubenswrapper[4642]: I0128 07:12:56.040508 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerStarted","Data":"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b"} Jan 28 07:12:56 crc kubenswrapper[4642]: I0128 07:12:56.051811 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k754x" podStartSLOduration=1.461034621 podStartE2EDuration="3.051797538s" podCreationTimestamp="2026-01-28 07:12:53 +0000 UTC" firstStartedPulling="2026-01-28 07:12:54.025504909 +0000 UTC m=+1497.257593719" lastFinishedPulling="2026-01-28 07:12:55.616267827 +0000 UTC m=+1498.848356636" observedRunningTime="2026-01-28 07:12:56.051573908 +0000 UTC m=+1499.283662717" watchObservedRunningTime="2026-01-28 07:12:56.051797538 +0000 UTC m=+1499.283886347" Jan 28 07:13:02 crc kubenswrapper[4642]: I0128 07:13:02.311048 4642 scope.go:117] "RemoveContainer" containerID="129ec0622f1eed4bd86fd6f642ea8bf1d9bc0dead6da5d1e928290ce9f026f40" Jan 28 07:13:02 crc kubenswrapper[4642]: I0128 07:13:02.330447 4642 scope.go:117] "RemoveContainer" containerID="5ca8d221107a797165bcbbe394753e29d5cef4cb1e8fdf395528377cf1de7395" Jan 28 07:13:03 crc kubenswrapper[4642]: I0128 07:13:03.098457 4642 scope.go:117] 
"RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:13:03 crc kubenswrapper[4642]: E0128 07:13:03.098886 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:13:03 crc kubenswrapper[4642]: I0128 07:13:03.346772 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:03 crc kubenswrapper[4642]: I0128 07:13:03.346812 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:03 crc kubenswrapper[4642]: I0128 07:13:03.382180 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:04 crc kubenswrapper[4642]: I0128 07:13:04.124556 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:04 crc kubenswrapper[4642]: I0128 07:13:04.160788 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.107661 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k754x" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="registry-server" containerID="cri-o://c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b" gracePeriod=2 Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.468563 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.605918 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msxvv\" (UniqueName: \"kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv\") pod \"530ae063-7096-44e9-b623-402fb7c1944a\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.605978 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities\") pod \"530ae063-7096-44e9-b623-402fb7c1944a\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.606148 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content\") pod \"530ae063-7096-44e9-b623-402fb7c1944a\" (UID: \"530ae063-7096-44e9-b623-402fb7c1944a\") " Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.606645 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities" (OuterVolumeSpecName: "utilities") pod "530ae063-7096-44e9-b623-402fb7c1944a" (UID: "530ae063-7096-44e9-b623-402fb7c1944a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.606747 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.610038 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv" (OuterVolumeSpecName: "kube-api-access-msxvv") pod "530ae063-7096-44e9-b623-402fb7c1944a" (UID: "530ae063-7096-44e9-b623-402fb7c1944a"). InnerVolumeSpecName "kube-api-access-msxvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.646380 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "530ae063-7096-44e9-b623-402fb7c1944a" (UID: "530ae063-7096-44e9-b623-402fb7c1944a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.708340 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/530ae063-7096-44e9-b623-402fb7c1944a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:06 crc kubenswrapper[4642]: I0128 07:13:06.708364 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msxvv\" (UniqueName: \"kubernetes.io/projected/530ae063-7096-44e9-b623-402fb7c1944a-kube-api-access-msxvv\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.125177 4642 generic.go:334] "Generic (PLEG): container finished" podID="530ae063-7096-44e9-b623-402fb7c1944a" containerID="c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b" exitCode=0 Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.125246 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerDied","Data":"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b"} Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.125925 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k754x" event={"ID":"530ae063-7096-44e9-b623-402fb7c1944a","Type":"ContainerDied","Data":"036637e5a1d547063588492d7513a1ef5e20065496b649b951961f8988ae161b"} Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.125987 4642 scope.go:117] "RemoveContainer" containerID="c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.125268 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k754x" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.145979 4642 scope.go:117] "RemoveContainer" containerID="ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.148681 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.156993 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k754x"] Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.173660 4642 scope.go:117] "RemoveContainer" containerID="c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.191594 4642 scope.go:117] "RemoveContainer" containerID="c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b" Jan 28 07:13:07 crc kubenswrapper[4642]: E0128 07:13:07.191937 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b\": container with ID starting with c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b not found: ID does not exist" containerID="c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.191969 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b"} err="failed to get container status \"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b\": rpc error: code = NotFound desc = could not find container \"c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b\": container with ID starting with c70dc5c7d6bad575d788a80bd2d298bfc12ac62cf07b000dbff6fd21daf5977b not found: ID does not exist" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.192014 4642 scope.go:117] "RemoveContainer" containerID="ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84" Jan 28 07:13:07 crc kubenswrapper[4642]: E0128 07:13:07.192306 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84\": container with ID starting with ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84 not found: ID does not exist" containerID="ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.192342 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84"} err="failed to get container status \"ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84\": rpc error: code = NotFound desc = could not find container \"ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84\": container with ID starting with ab3739781601954277d2e9f89f9acdb6d630ae582f1efe13560c6e36a2341e84 not found: ID does not exist" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.192365 4642 scope.go:117] "RemoveContainer" containerID="c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513" Jan 28 07:13:07 crc kubenswrapper[4642]: E0128 07:13:07.192669 4642 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513\": container with ID starting with c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513 not found: ID does not exist" containerID="c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513" Jan 28 07:13:07 crc kubenswrapper[4642]: I0128 07:13:07.192694 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513"} err="failed to get container status \"c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513\": rpc error: code = NotFound desc = could not find container \"c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513\": container with ID starting with c1e44caa8d8f17b104dba06152d91e628ec32abdce02691f8b0b961253b9d513 not found: ID does not exist" Jan 28 07:13:09 crc kubenswrapper[4642]: I0128 07:13:09.106989 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530ae063-7096-44e9-b623-402fb7c1944a" path="/var/lib/kubelet/pods/530ae063-7096-44e9-b623-402fb7c1944a/volumes" Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.035810 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-98zhc"] Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.042762 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-64da-account-create-update-z85gw"] Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.049565 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-98zhc"] Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.054928 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-64da-account-create-update-z85gw"] Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.107075 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e90a78-c529-4572-83af-92513b7ce545" path="/var/lib/kubelet/pods/b7e90a78-c529-4572-83af-92513b7ce545/volumes" Jan 28 07:13:17 crc kubenswrapper[4642]: I0128 07:13:17.107666 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10693d6-4ced-4a1b-a821-887fac229b90" path="/var/lib/kubelet/pods/c10693d6-4ced-4a1b-a821-887fac229b90/volumes" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.023081 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dlrtb"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.035430 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-703f-account-create-update-qw47f"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.042127 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-703f-account-create-update-qw47f"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.047304 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dlrtb"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.052634 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b6eb-account-create-update-4q9gp"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.057664 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b6eb-account-create-update-4q9gp"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.098568 4642 scope.go:117] "RemoveContainer" 
containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:13:18 crc kubenswrapper[4642]: E0128 07:13:18.098776 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.220851 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:18 crc kubenswrapper[4642]: E0128 07:13:18.221144 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="extract-content" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.221155 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="extract-content" Jan 28 07:13:18 crc kubenswrapper[4642]: E0128 07:13:18.221176 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="extract-utilities" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.221403 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="extract-utilities" Jan 28 07:13:18 crc kubenswrapper[4642]: E0128 07:13:18.221427 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="registry-server" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.221434 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="registry-server" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.221602 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="530ae063-7096-44e9-b623-402fb7c1944a" containerName="registry-server" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.222813 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.228505 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.304009 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6hg\" (UniqueName: \"kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.304055 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.304096 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.406755 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6hg\" (UniqueName: \"kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.406819 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.406886 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.407229 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.407293 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.429106 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gq6hg\" (UniqueName: \"kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg\") pod \"certified-operators-c7295\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.541619 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:18 crc kubenswrapper[4642]: I0128 07:13:18.974626 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.030552 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rg5pj"] Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.037534 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rg5pj"] Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.107075 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbb7547-8266-45d7-9198-1f6fd3a4418b" path="/var/lib/kubelet/pods/4bbb7547-8266-45d7-9198-1f6fd3a4418b/volumes" Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.107751 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ef778a-9202-43a6-b1b0-fc0938621c71" path="/var/lib/kubelet/pods/95ef778a-9202-43a6-b1b0-fc0938621c71/volumes" Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.109381 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecdc58f-cfc4-47ac-959c-34336d6c2e36" path="/var/lib/kubelet/pods/9ecdc58f-cfc4-47ac-959c-34336d6c2e36/volumes" Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.109872 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37e9b0e-970a-44db-bc75-0782625ab2a2" path="/var/lib/kubelet/pods/c37e9b0e-970a-44db-bc75-0782625ab2a2/volumes" Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.210603 4642 generic.go:334] "Generic (PLEG): container finished" podID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerID="eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77" exitCode=0 Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.210654 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerDied","Data":"eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77"} Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.210700 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerStarted","Data":"6123e07dd618492bc26bb5af3338baf0d5c01d467401b71dc86e3728750c408e"} Jan 28 07:13:19 crc kubenswrapper[4642]: I0128 07:13:19.213207 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:13:20 crc kubenswrapper[4642]: I0128 07:13:20.219544 4642 generic.go:334] "Generic (PLEG): container finished" podID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerID="2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d" exitCode=0 Jan 28 07:13:20 crc kubenswrapper[4642]: I0128 07:13:20.219608 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" 
event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerDied","Data":"2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d"} Jan 28 07:13:21 crc kubenswrapper[4642]: I0128 07:13:21.228866 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerStarted","Data":"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b"} Jan 28 07:13:21 crc kubenswrapper[4642]: I0128 07:13:21.245208 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c7295" podStartSLOduration=1.640390788 podStartE2EDuration="3.245181544s" podCreationTimestamp="2026-01-28 07:13:18 +0000 UTC" firstStartedPulling="2026-01-28 07:13:19.212954605 +0000 UTC m=+1522.445043415" lastFinishedPulling="2026-01-28 07:13:20.817745362 +0000 UTC m=+1524.049834171" observedRunningTime="2026-01-28 07:13:21.241234221 +0000 UTC m=+1524.473323030" watchObservedRunningTime="2026-01-28 07:13:21.245181544 +0000 UTC m=+1524.477270353" Jan 28 07:13:28 crc kubenswrapper[4642]: I0128 07:13:28.542162 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:28 crc kubenswrapper[4642]: I0128 07:13:28.542586 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:28 crc kubenswrapper[4642]: I0128 07:13:28.574097 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:29 crc kubenswrapper[4642]: I0128 07:13:29.313715 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:29 crc kubenswrapper[4642]: I0128 07:13:29.347716 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.294611 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c7295" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="registry-server" containerID="cri-o://fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b" gracePeriod=2 Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.646979 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.833800 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content\") pod \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.834063 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities\") pod \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.834283 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq6hg\" (UniqueName: \"kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg\") pod \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\" (UID: \"9f6b1b43-ba6b-46f8-82a1-246894e019b8\") " Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.834804 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities" (OuterVolumeSpecName: "utilities") pod "9f6b1b43-ba6b-46f8-82a1-246894e019b8" (UID: "9f6b1b43-ba6b-46f8-82a1-246894e019b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.838462 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg" (OuterVolumeSpecName: "kube-api-access-gq6hg") pod "9f6b1b43-ba6b-46f8-82a1-246894e019b8" (UID: "9f6b1b43-ba6b-46f8-82a1-246894e019b8"). InnerVolumeSpecName "kube-api-access-gq6hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.865393 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f6b1b43-ba6b-46f8-82a1-246894e019b8" (UID: "9f6b1b43-ba6b-46f8-82a1-246894e019b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.936411 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq6hg\" (UniqueName: \"kubernetes.io/projected/9f6b1b43-ba6b-46f8-82a1-246894e019b8-kube-api-access-gq6hg\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.936434 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:31 crc kubenswrapper[4642]: I0128 07:13:31.936442 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6b1b43-ba6b-46f8-82a1-246894e019b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.303501 4642 generic.go:334] "Generic (PLEG): container finished" podID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerID="fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b" exitCode=0 Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.303542 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerDied","Data":"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b"} Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.303570 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7295" event={"ID":"9f6b1b43-ba6b-46f8-82a1-246894e019b8","Type":"ContainerDied","Data":"6123e07dd618492bc26bb5af3338baf0d5c01d467401b71dc86e3728750c408e"} Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.303590 4642 scope.go:117] "RemoveContainer" containerID="fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.303767 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c7295" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.319896 4642 scope.go:117] "RemoveContainer" containerID="2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.328894 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.334615 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c7295"] Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.338853 4642 scope.go:117] "RemoveContainer" containerID="eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.372717 4642 scope.go:117] "RemoveContainer" containerID="fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b" Jan 28 07:13:32 crc kubenswrapper[4642]: E0128 07:13:32.373050 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b\": container with ID starting with fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b not found: ID does not exist" containerID="fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.373091 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b"} err="failed to get container status \"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b\": rpc error: code = NotFound desc = could not find container \"fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b\": container with ID starting with fee64265c7a1fee0a1ef43af367ce31e9c8fd86e9e3a916bbbc85bd10429cf3b not found: ID does not exist" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.373117 4642 scope.go:117] "RemoveContainer" containerID="2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d" Jan 28 07:13:32 crc kubenswrapper[4642]: E0128 07:13:32.373436 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d\": container with ID starting with 2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d not found: ID does not exist" containerID="2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.373484 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d"} err="failed to get container status \"2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d\": rpc error: code = NotFound desc = could not find container \"2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d\": container with ID starting with 2e9fce0a5787410fa3a6f40fb95c11a3b159e773a8d3cb7ce2416ac274d6ed1d not found: ID does not exist" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.373509 4642 scope.go:117] "RemoveContainer" containerID="eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77" Jan 28 07:13:32 crc kubenswrapper[4642]: E0128 07:13:32.373739 4642 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77\": container with ID starting with eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77 not found: ID does not exist" containerID="eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77" Jan 28 07:13:32 crc kubenswrapper[4642]: I0128 07:13:32.373763 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77"} err="failed to get container status \"eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77\": rpc error: code = NotFound desc = could not find container \"eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77\": container with ID starting with eef0ed0efc4e7b3940ec63e4448aed2bcd1bf92710797d3fc9534815d5493e77 not found: ID does not exist" Jan 28 07:13:33 crc kubenswrapper[4642]: I0128 07:13:33.098880 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:13:33 crc kubenswrapper[4642]: E0128 07:13:33.099142 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:13:33 crc kubenswrapper[4642]: I0128 07:13:33.106823 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" path="/var/lib/kubelet/pods/9f6b1b43-ba6b-46f8-82a1-246894e019b8/volumes" Jan 28 07:13:36 crc kubenswrapper[4642]: I0128 07:13:36.029033 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6m6vs"] Jan 28 07:13:36 crc kubenswrapper[4642]: I0128 07:13:36.035224 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6m6vs"] Jan 28 07:13:37 crc kubenswrapper[4642]: I0128 07:13:37.106206 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfe7625-9f97-437f-bf74-86d67336b34f" path="/var/lib/kubelet/pods/adfe7625-9f97-437f-bf74-86d67336b34f/volumes" Jan 28 07:13:45 crc kubenswrapper[4642]: I0128 07:13:45.021072 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-97hmf"] Jan 28 07:13:45 crc kubenswrapper[4642]: I0128 07:13:45.028401 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-97hmf"] Jan 28 07:13:45 crc kubenswrapper[4642]: I0128 07:13:45.106313 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096310a2-8b6e-436b-9b34-6263ac3806b6" path="/var/lib/kubelet/pods/096310a2-8b6e-436b-9b34-6263ac3806b6/volumes" Jan 28 07:13:48 crc kubenswrapper[4642]: I0128 07:13:48.098154 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:13:48 crc kubenswrapper[4642]: E0128 07:13:48.098822 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:13:52 crc kubenswrapper[4642]: I0128 07:13:52.021929 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vn7qt"] Jan 28 07:13:52 crc kubenswrapper[4642]: I0128 07:13:52.029970 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vn7qt"] Jan 28 07:13:53 crc kubenswrapper[4642]: I0128 07:13:53.106396 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47afaa6-b39b-4af7-aaee-d47fa6e7932a" path="/var/lib/kubelet/pods/c47afaa6-b39b-4af7-aaee-d47fa6e7932a/volumes" Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.024494 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3a33-account-create-update-xs5t5"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.031728 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jl29g"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.038926 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-93c0-account-create-update-jwrtw"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.044266 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5126-account-create-update-gt9kz"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.049384 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3a33-account-create-update-xs5t5"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.056587 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jl29g"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.062011 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-93c0-account-create-update-jwrtw"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.066714 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5126-account-create-update-gt9kz"] Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.106743 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3987de36-92f6-4d2b-b4d6-42ab67c7525f" path="/var/lib/kubelet/pods/3987de36-92f6-4d2b-b4d6-42ab67c7525f/volumes" Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.107529 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bc8d83-3e11-4a38-9df5-f1c3f391540f" path="/var/lib/kubelet/pods/61bc8d83-3e11-4a38-9df5-f1c3f391540f/volumes" Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.108090 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806827f7-19db-4656-89ed-9d2253ecbf67" path="/var/lib/kubelet/pods/806827f7-19db-4656-89ed-9d2253ecbf67/volumes" Jan 28 07:13:55 crc kubenswrapper[4642]: I0128 07:13:55.108672 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910e369b-6c77-43c1-95de-f125a1813bc6" path="/var/lib/kubelet/pods/910e369b-6c77-43c1-95de-f125a1813bc6/volumes" Jan 28 07:13:58 crc kubenswrapper[4642]: I0128 07:13:58.025167 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gh7r2"] Jan 28 07:13:58 crc kubenswrapper[4642]: I0128 07:13:58.034068 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gh7r2"] Jan 28 07:13:59 crc kubenswrapper[4642]: I0128 07:13:59.098334 4642 
scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:13:59 crc kubenswrapper[4642]: E0128 07:13:59.098613 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:13:59 crc kubenswrapper[4642]: I0128 07:13:59.106104 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995eb4f3-f6c8-4fb3-a991-8777f7b645cb" path="/var/lib/kubelet/pods/995eb4f3-f6c8-4fb3-a991-8777f7b645cb/volumes" Jan 28 07:14:01 crc kubenswrapper[4642]: I0128 07:14:01.019915 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kxtxh"] Jan 28 07:14:01 crc kubenswrapper[4642]: I0128 07:14:01.042860 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kxtxh"] Jan 28 07:14:01 crc kubenswrapper[4642]: I0128 07:14:01.106443 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0" path="/var/lib/kubelet/pods/455075b1-b6a1-4b44-b1a7-fc86ed3fc8e0/volumes" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.375832 4642 scope.go:117] "RemoveContainer" containerID="255b576e9feec0de26dbc9a69f13edcab06e7cfe5df136a1809c418f4bbe3287" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.395311 4642 scope.go:117] "RemoveContainer" containerID="f0b14c61ae9adc600cff3c4c95c1e7be6419108332a6b145c55c936a93d4ea86" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.438926 4642 scope.go:117] "RemoveContainer" containerID="3350b7651fdc3065210fdccdcf5f7c3b46f91c5ead66442b9642524a0c5bcda2" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.457213 4642 scope.go:117] "RemoveContainer" containerID="38e8b7b27d4683fe6c07bef8e86d9255eb5962e071a0583ce41df11189fef666" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.484212 4642 scope.go:117] "RemoveContainer" containerID="7ae425e8f94a4f63672e4cfa8a6c900a3dadacfa6cfee846ce69ea39fe6d4a77" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.507450 4642 scope.go:117] "RemoveContainer" containerID="b8385e6027184f93c993c5dc6836d52a30c77447640e2428806bcc3cfce26f4d" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.537348 4642 scope.go:117] "RemoveContainer" containerID="eb960a0316c91acff06525b7c0f6749f8951c001c383be27618cda1cbfa5a48b" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.554679 4642 scope.go:117] "RemoveContainer" containerID="f734b7bb98c92bd5a4bb422c0d89a59844110a34b4c30e57d5142dfcfda108fa" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.570149 4642 scope.go:117] "RemoveContainer" containerID="2a9544aee5df4a6addb8b1bb0192adcdc25a5b6392d2e625f467814f2e942c03" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.584171 4642 scope.go:117] "RemoveContainer" containerID="c0580a3436a26f97b7dfdeb5c1cc58a5444133baa07d8d45a8dbec558df6bf0a" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.598821 4642 scope.go:117] "RemoveContainer" containerID="ee5d93f01fa5bb07bc234a89d646ecd0e5b52c4158e76237be7999a9b57da439" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.625974 4642 scope.go:117] "RemoveContainer" 
containerID="b2259952f305589b13f158320ae6f333bcd9218bef4a783ee41ca75cb770b361" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.641843 4642 scope.go:117] "RemoveContainer" containerID="7d77c7ac7f5ff552436038a3f0848ec7bfc69bf52dcac07c423e4b3060cf797f" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.659139 4642 scope.go:117] "RemoveContainer" containerID="41ac7ba403bc3a29df1cde08ca227a70a61a69d3668bf8d4ebb8ba754652c629" Jan 28 07:14:02 crc kubenswrapper[4642]: I0128 07:14:02.678470 4642 scope.go:117] "RemoveContainer" containerID="a6c5166a99a3d5e78a7716afdd8aad77f381dfbc0490021d064ed6b5ee6f781d" Jan 28 07:14:10 crc kubenswrapper[4642]: I0128 07:14:10.028693 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ds4qn"] Jan 28 07:14:10 crc kubenswrapper[4642]: I0128 07:14:10.051337 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ds4qn"] Jan 28 07:14:11 crc kubenswrapper[4642]: I0128 07:14:11.107061 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1bc0b13-73e3-41ef-a7a5-c5da7462234e" path="/var/lib/kubelet/pods/b1bc0b13-73e3-41ef-a7a5-c5da7462234e/volumes" Jan 28 07:14:12 crc kubenswrapper[4642]: I0128 07:14:12.098439 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:14:12 crc kubenswrapper[4642]: E0128 07:14:12.098921 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:14:14 crc kubenswrapper[4642]: I0128 07:14:14.020775 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8s4pp"] Jan 28 07:14:14 crc kubenswrapper[4642]: I0128 07:14:14.027990 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8s4pp"] Jan 28 07:14:15 crc kubenswrapper[4642]: I0128 07:14:15.022149 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vtvtp"] Jan 28 07:14:15 crc kubenswrapper[4642]: I0128 07:14:15.028279 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vtvtp"] Jan 28 07:14:15 crc kubenswrapper[4642]: I0128 07:14:15.106378 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914420d1-9b01-49b9-962d-405cc170a061" path="/var/lib/kubelet/pods/914420d1-9b01-49b9-962d-405cc170a061/volumes" Jan 28 07:14:15 crc kubenswrapper[4642]: I0128 07:14:15.107011 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e" path="/var/lib/kubelet/pods/e57d0d61-2e5c-4cd6-be95-3e3c6f102f7e/volumes" Jan 28 07:14:16 crc kubenswrapper[4642]: I0128 07:14:16.021224 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cpmkw"] Jan 28 07:14:16 crc kubenswrapper[4642]: I0128 07:14:16.027752 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cpmkw"] Jan 28 07:14:17 crc kubenswrapper[4642]: I0128 07:14:17.106056 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae0170c-94f4-4e36-b99d-8d183fb5b6e1" 
path="/var/lib/kubelet/pods/fae0170c-94f4-4e36-b99d-8d183fb5b6e1/volumes" Jan 28 07:14:27 crc kubenswrapper[4642]: I0128 07:14:27.103967 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:14:27 crc kubenswrapper[4642]: E0128 07:14:27.104641 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:14:37 crc kubenswrapper[4642]: I0128 07:14:37.735137 4642 generic.go:334] "Generic (PLEG): container finished" podID="f6bd9dfb-a07e-4082-ab80-e7de0f582617" containerID="283a4815dd4e2c0d7415f68a0b2720aeddb2ef6edfeb6ddd0fb1c6096554a94f" exitCode=0 Jan 28 07:14:37 crc kubenswrapper[4642]: I0128 07:14:37.735328 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" event={"ID":"f6bd9dfb-a07e-4082-ab80-e7de0f582617","Type":"ContainerDied","Data":"283a4815dd4e2c0d7415f68a0b2720aeddb2ef6edfeb6ddd0fb1c6096554a94f"} Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.048019 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.098561 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:14:39 crc kubenswrapper[4642]: E0128 07:14:39.098799 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.141922 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6wnn\" (UniqueName: \"kubernetes.io/projected/f6bd9dfb-a07e-4082-ab80-e7de0f582617-kube-api-access-b6wnn\") pod \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.142024 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-ssh-key-openstack-edpm-ipam\") pod \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.142361 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory\") pod \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\" (UID: \"f6bd9dfb-a07e-4082-ab80-e7de0f582617\") " Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.146448 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bd9dfb-a07e-4082-ab80-e7de0f582617-kube-api-access-b6wnn" 
(OuterVolumeSpecName: "kube-api-access-b6wnn") pod "f6bd9dfb-a07e-4082-ab80-e7de0f582617" (UID: "f6bd9dfb-a07e-4082-ab80-e7de0f582617"). InnerVolumeSpecName "kube-api-access-b6wnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.162827 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory" (OuterVolumeSpecName: "inventory") pod "f6bd9dfb-a07e-4082-ab80-e7de0f582617" (UID: "f6bd9dfb-a07e-4082-ab80-e7de0f582617"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.163145 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f6bd9dfb-a07e-4082-ab80-e7de0f582617" (UID: "f6bd9dfb-a07e-4082-ab80-e7de0f582617"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.244009 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6wnn\" (UniqueName: \"kubernetes.io/projected/f6bd9dfb-a07e-4082-ab80-e7de0f582617-kube-api-access-b6wnn\") on node \"crc\" DevicePath \"\"" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.244035 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.244044 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bd9dfb-a07e-4082-ab80-e7de0f582617-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.750412 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" event={"ID":"f6bd9dfb-a07e-4082-ab80-e7de0f582617","Type":"ContainerDied","Data":"4ff6867c6317f55ae3d5a4e6539a11a69fdf292c42d5d73ad8a897df8a0e5658"} Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.750451 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff6867c6317f55ae3d5a4e6539a11a69fdf292c42d5d73ad8a897df8a0e5658" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.750459 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.811521 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj"] Jan 28 07:14:39 crc kubenswrapper[4642]: E0128 07:14:39.811978 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="extract-utilities" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.811991 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="extract-utilities" Jan 28 07:14:39 crc kubenswrapper[4642]: E0128 07:14:39.812006 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="registry-server" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812013 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="registry-server" Jan 28 07:14:39 crc kubenswrapper[4642]: E0128 07:14:39.812026 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bd9dfb-a07e-4082-ab80-e7de0f582617" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812034 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bd9dfb-a07e-4082-ab80-e7de0f582617" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 07:14:39 crc kubenswrapper[4642]: E0128 07:14:39.812050 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="extract-content" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812055 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="extract-content" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812244 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6b1b43-ba6b-46f8-82a1-246894e019b8" containerName="registry-server" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812265 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bd9dfb-a07e-4082-ab80-e7de0f582617" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.812903 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.814597 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.814911 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.814990 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.814992 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.820036 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj"] Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.954578 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.954630 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:39 crc kubenswrapper[4642]: I0128 07:14:39.954733 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfzt\" (UniqueName: \"kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.056394 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.056436 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.056502 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfzt\" (UniqueName: 
\"kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.060071 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.060865 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.071670 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfzt\" (UniqueName: \"kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.134824 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.549577 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj"] Jan 28 07:14:40 crc kubenswrapper[4642]: W0128 07:14:40.553602 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda99427f8_9376_4fe9_81ed_cfe6740f4581.slice/crio-6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3 WatchSource:0}: Error finding container 6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3: Status 404 returned error can't find the container with id 6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3 Jan 28 07:14:40 crc kubenswrapper[4642]: I0128 07:14:40.758251 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" event={"ID":"a99427f8-9376-4fe9-81ed-cfe6740f4581","Type":"ContainerStarted","Data":"6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3"} Jan 28 07:14:41 crc kubenswrapper[4642]: I0128 07:14:41.766709 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" event={"ID":"a99427f8-9376-4fe9-81ed-cfe6740f4581","Type":"ContainerStarted","Data":"a2539b829e1698d31c5e9329f22dd31af6e17a582c6c7d1949ad4c64198a78e6"} Jan 28 07:14:41 crc kubenswrapper[4642]: I0128 07:14:41.783617 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" podStartSLOduration=2.23348398 
podStartE2EDuration="2.783599423s" podCreationTimestamp="2026-01-28 07:14:39 +0000 UTC" firstStartedPulling="2026-01-28 07:14:40.556050095 +0000 UTC m=+1603.788138903" lastFinishedPulling="2026-01-28 07:14:41.106165536 +0000 UTC m=+1604.338254346" observedRunningTime="2026-01-28 07:14:41.776797522 +0000 UTC m=+1605.008886331" watchObservedRunningTime="2026-01-28 07:14:41.783599423 +0000 UTC m=+1605.015688232" Jan 28 07:14:51 crc kubenswrapper[4642]: I0128 07:14:51.098963 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:14:51 crc kubenswrapper[4642]: E0128 07:14:51.099675 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.132461 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm"] Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.133929 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.136735 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.137070 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.147384 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm"] Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.301285 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.301505 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbmw7\" (UniqueName: \"kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.301545 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.402748 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sbmw7\" (UniqueName: \"kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.402954 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.403146 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.404112 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.409339 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.415809 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbmw7\" (UniqueName: \"kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7\") pod \"collect-profiles-29493075-g6njm\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.449956 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.806003 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm"] Jan 28 07:15:00 crc kubenswrapper[4642]: W0128 07:15:00.812900 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b35b199_6341_41dc_81e2_afc74b3ecf75.slice/crio-6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939 WatchSource:0}: Error finding container 6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939: Status 404 returned error can't find the container with id 6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939 Jan 28 07:15:00 crc kubenswrapper[4642]: I0128 07:15:00.886144 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" event={"ID":"5b35b199-6341-41dc-81e2-afc74b3ecf75","Type":"ContainerStarted","Data":"6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939"} Jan 28 07:15:01 crc kubenswrapper[4642]: I0128 07:15:01.895027 4642 generic.go:334] "Generic (PLEG): container finished" podID="5b35b199-6341-41dc-81e2-afc74b3ecf75" containerID="fdbb82803530fbd9c47406375ac3b500ed6c40cf60e4ffa81da6c05f2e8eaa4a" exitCode=0 Jan 28 07:15:01 crc kubenswrapper[4642]: I0128 07:15:01.895089 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" event={"ID":"5b35b199-6341-41dc-81e2-afc74b3ecf75","Type":"ContainerDied","Data":"fdbb82803530fbd9c47406375ac3b500ed6c40cf60e4ffa81da6c05f2e8eaa4a"} Jan 28 07:15:02 crc kubenswrapper[4642]: I0128 07:15:02.098545 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:15:02 crc kubenswrapper[4642]: E0128 07:15:02.098759 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:15:02 crc kubenswrapper[4642]: I0128 07:15:02.890834 4642 scope.go:117] "RemoveContainer" containerID="a302f4c04cd9e0202f6bed62be7fa466b5300d3d7ce24cbcb7fc8bbf566f124b" Jan 28 07:15:02 crc kubenswrapper[4642]: I0128 07:15:02.926467 4642 scope.go:117] "RemoveContainer" containerID="4e4263ac1dc87a98ebb025c676004fd7e691a0a0b2599f225402e05a3235f616" Jan 28 07:15:02 crc kubenswrapper[4642]: I0128 07:15:02.973591 4642 scope.go:117] "RemoveContainer" containerID="ee9f1b4e9320eb073e04d76be36b698bc5d17488f1ccb1870e6b94381f2890ad" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.001251 4642 scope.go:117] "RemoveContainer" containerID="8f3bfb87982d1d73c0039a593d9c0a279acc73e1ec0d3418564012febbf9c609" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.130736 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.256141 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbmw7\" (UniqueName: \"kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7\") pod \"5b35b199-6341-41dc-81e2-afc74b3ecf75\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.256269 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume\") pod \"5b35b199-6341-41dc-81e2-afc74b3ecf75\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.256298 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume\") pod \"5b35b199-6341-41dc-81e2-afc74b3ecf75\" (UID: \"5b35b199-6341-41dc-81e2-afc74b3ecf75\") " Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.257132 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b35b199-6341-41dc-81e2-afc74b3ecf75" (UID: "5b35b199-6341-41dc-81e2-afc74b3ecf75"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.261923 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b35b199-6341-41dc-81e2-afc74b3ecf75" (UID: "5b35b199-6341-41dc-81e2-afc74b3ecf75"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.261950 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7" (OuterVolumeSpecName: "kube-api-access-sbmw7") pod "5b35b199-6341-41dc-81e2-afc74b3ecf75" (UID: "5b35b199-6341-41dc-81e2-afc74b3ecf75"). InnerVolumeSpecName "kube-api-access-sbmw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.359210 4642 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b35b199-6341-41dc-81e2-afc74b3ecf75-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.359236 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b35b199-6341-41dc-81e2-afc74b3ecf75-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.359246 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbmw7\" (UniqueName: \"kubernetes.io/projected/5b35b199-6341-41dc-81e2-afc74b3ecf75-kube-api-access-sbmw7\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.910158 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" event={"ID":"5b35b199-6341-41dc-81e2-afc74b3ecf75","Type":"ContainerDied","Data":"6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939"} Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.910214 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493075-g6njm" Jan 28 07:15:03 crc kubenswrapper[4642]: I0128 07:15:03.910230 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed3fc27da7e9d8cc74615fa9da687546d2271c0177ef60d72385df154045939" Jan 28 07:15:06 crc kubenswrapper[4642]: I0128 07:15:06.037221 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-68q8l"] Jan 28 07:15:06 crc kubenswrapper[4642]: I0128 07:15:06.043118 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-68q8l"] Jan 28 07:15:07 crc kubenswrapper[4642]: I0128 07:15:07.019971 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-655lk"] Jan 28 07:15:07 crc kubenswrapper[4642]: I0128 07:15:07.026368 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-655lk"] Jan 28 07:15:07 crc kubenswrapper[4642]: I0128 07:15:07.106815 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3cadcb-de4f-48dd-aae9-c2ff2ca38571" path="/var/lib/kubelet/pods/be3cadcb-de4f-48dd-aae9-c2ff2ca38571/volumes" Jan 28 07:15:07 crc kubenswrapper[4642]: I0128 07:15:07.107380 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6aa333d-da9b-4f35-86a3-b88b6fe94ebc" path="/var/lib/kubelet/pods/e6aa333d-da9b-4f35-86a3-b88b6fe94ebc/volumes" Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.025913 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2f19-account-create-update-x86m7"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.032665 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9752-account-create-update-vw6jg"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.039167 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-40e8-account-create-update-mlk24"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.044642 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d7p52"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.049339 4642 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-40e8-account-create-update-mlk24"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.054628 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2f19-account-create-update-x86m7"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.059519 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bk4gf"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.064255 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9752-account-create-update-vw6jg"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.068864 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d7p52"] Jan 28 07:15:08 crc kubenswrapper[4642]: I0128 07:15:08.073505 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bk4gf"] Jan 28 07:15:09 crc kubenswrapper[4642]: I0128 07:15:09.106756 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f9d883-d85a-4ed8-907b-c4e79d737c18" path="/var/lib/kubelet/pods/35f9d883-d85a-4ed8-907b-c4e79d737c18/volumes" Jan 28 07:15:09 crc kubenswrapper[4642]: I0128 07:15:09.107561 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd" path="/var/lib/kubelet/pods/41c8e5f3-dd67-49ed-97f1-c8eac8e3e8dd/volumes" Jan 28 07:15:09 crc kubenswrapper[4642]: I0128 07:15:09.108052 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cf664c-6221-49e7-8eda-bd3b5f059cbe" path="/var/lib/kubelet/pods/48cf664c-6221-49e7-8eda-bd3b5f059cbe/volumes" Jan 28 07:15:09 crc kubenswrapper[4642]: I0128 07:15:09.108586 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef3a51e-a476-442b-baf1-5650caa83c8e" path="/var/lib/kubelet/pods/9ef3a51e-a476-442b-baf1-5650caa83c8e/volumes" Jan 28 07:15:09 crc kubenswrapper[4642]: I0128 07:15:09.109457 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d9b18d-9328-43f4-85d4-cf05abf0ac5f" path="/var/lib/kubelet/pods/c3d9b18d-9328-43f4-85d4-cf05abf0ac5f/volumes" Jan 28 07:15:13 crc kubenswrapper[4642]: I0128 07:15:13.098264 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:15:13 crc kubenswrapper[4642]: E0128 07:15:13.098782 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:15:28 crc kubenswrapper[4642]: I0128 07:15:28.098420 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:15:28 crc kubenswrapper[4642]: E0128 07:15:28.098965 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 
07:15:33 crc kubenswrapper[4642]: I0128 07:15:33.090588 4642 generic.go:334] "Generic (PLEG): container finished" podID="a99427f8-9376-4fe9-81ed-cfe6740f4581" containerID="a2539b829e1698d31c5e9329f22dd31af6e17a582c6c7d1949ad4c64198a78e6" exitCode=0 Jan 28 07:15:33 crc kubenswrapper[4642]: I0128 07:15:33.090666 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" event={"ID":"a99427f8-9376-4fe9-81ed-cfe6740f4581","Type":"ContainerDied","Data":"a2539b829e1698d31c5e9329f22dd31af6e17a582c6c7d1949ad4c64198a78e6"} Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.387752 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.437702 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzfzt\" (UniqueName: \"kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt\") pod \"a99427f8-9376-4fe9-81ed-cfe6740f4581\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.437770 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam\") pod \"a99427f8-9376-4fe9-81ed-cfe6740f4581\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.437976 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory\") pod \"a99427f8-9376-4fe9-81ed-cfe6740f4581\" (UID: \"a99427f8-9376-4fe9-81ed-cfe6740f4581\") " Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.442358 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt" (OuterVolumeSpecName: "kube-api-access-mzfzt") pod "a99427f8-9376-4fe9-81ed-cfe6740f4581" (UID: "a99427f8-9376-4fe9-81ed-cfe6740f4581"). InnerVolumeSpecName "kube-api-access-mzfzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.458268 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory" (OuterVolumeSpecName: "inventory") pod "a99427f8-9376-4fe9-81ed-cfe6740f4581" (UID: "a99427f8-9376-4fe9-81ed-cfe6740f4581"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.460961 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a99427f8-9376-4fe9-81ed-cfe6740f4581" (UID: "a99427f8-9376-4fe9-81ed-cfe6740f4581"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.540708 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzfzt\" (UniqueName: \"kubernetes.io/projected/a99427f8-9376-4fe9-81ed-cfe6740f4581-kube-api-access-mzfzt\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.540742 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:34 crc kubenswrapper[4642]: I0128 07:15:34.540754 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a99427f8-9376-4fe9-81ed-cfe6740f4581-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.103997 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.105204 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj" event={"ID":"a99427f8-9376-4fe9-81ed-cfe6740f4581","Type":"ContainerDied","Data":"6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3"} Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.105235 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6313d3faae91dfff243a9d7d034f2db690cc349034f2d1e9ec2b9105b24bdec3" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.159840 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225"] Jan 28 07:15:35 crc kubenswrapper[4642]: E0128 07:15:35.160236 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99427f8-9376-4fe9-81ed-cfe6740f4581" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.160255 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99427f8-9376-4fe9-81ed-cfe6740f4581" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:35 crc kubenswrapper[4642]: E0128 07:15:35.160269 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b35b199-6341-41dc-81e2-afc74b3ecf75" containerName="collect-profiles" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.160275 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b35b199-6341-41dc-81e2-afc74b3ecf75" containerName="collect-profiles" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.160531 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99427f8-9376-4fe9-81ed-cfe6740f4581" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.160551 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b35b199-6341-41dc-81e2-afc74b3ecf75" containerName="collect-profiles" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.161065 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.162549 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.162721 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.162980 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.163523 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.174889 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225"] Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.251689 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55lj4\" (UniqueName: \"kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.251854 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.251884 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.353527 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55lj4\" (UniqueName: \"kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.353597 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.353645 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.356752 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.357256 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.367428 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55lj4\" (UniqueName: \"kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-h2225\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.473775 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:35 crc kubenswrapper[4642]: I0128 07:15:35.894231 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225"] Jan 28 07:15:36 crc kubenswrapper[4642]: I0128 07:15:36.111736 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" event={"ID":"8032b59a-f024-4eb0-93d7-d26a77889a96","Type":"ContainerStarted","Data":"c63c9b7a34f1d81238f93a7c5c6491e4f1d6e6a4fe4dadebdff88a2edef92b5e"} Jan 28 07:15:37 crc kubenswrapper[4642]: I0128 07:15:37.119782 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" event={"ID":"8032b59a-f024-4eb0-93d7-d26a77889a96","Type":"ContainerStarted","Data":"7de6f63884c25f1052aa79b51b40e41947a074b231a34d040d9891fb8578bd0e"} Jan 28 07:15:37 crc kubenswrapper[4642]: I0128 07:15:37.134495 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" podStartSLOduration=1.660434918 podStartE2EDuration="2.134481866s" podCreationTimestamp="2026-01-28 07:15:35 +0000 UTC" firstStartedPulling="2026-01-28 07:15:35.896720936 +0000 UTC m=+1659.128809745" lastFinishedPulling="2026-01-28 07:15:36.370767885 +0000 UTC m=+1659.602856693" observedRunningTime="2026-01-28 07:15:37.129868104 +0000 UTC m=+1660.361956913" watchObservedRunningTime="2026-01-28 07:15:37.134481866 +0000 UTC m=+1660.366570664" Jan 28 07:15:40 crc kubenswrapper[4642]: I0128 07:15:40.100993 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 
07:15:40 crc kubenswrapper[4642]: E0128 07:15:40.106540 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:15:40 crc kubenswrapper[4642]: I0128 07:15:40.140420 4642 generic.go:334] "Generic (PLEG): container finished" podID="8032b59a-f024-4eb0-93d7-d26a77889a96" containerID="7de6f63884c25f1052aa79b51b40e41947a074b231a34d040d9891fb8578bd0e" exitCode=0 Jan 28 07:15:40 crc kubenswrapper[4642]: I0128 07:15:40.140478 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" event={"ID":"8032b59a-f024-4eb0-93d7-d26a77889a96","Type":"ContainerDied","Data":"7de6f63884c25f1052aa79b51b40e41947a074b231a34d040d9891fb8578bd0e"} Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.436499 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.552253 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam\") pod \"8032b59a-f024-4eb0-93d7-d26a77889a96\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.552308 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory\") pod \"8032b59a-f024-4eb0-93d7-d26a77889a96\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.552338 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55lj4\" (UniqueName: \"kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4\") pod \"8032b59a-f024-4eb0-93d7-d26a77889a96\" (UID: \"8032b59a-f024-4eb0-93d7-d26a77889a96\") " Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.556753 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4" (OuterVolumeSpecName: "kube-api-access-55lj4") pod "8032b59a-f024-4eb0-93d7-d26a77889a96" (UID: "8032b59a-f024-4eb0-93d7-d26a77889a96"). InnerVolumeSpecName "kube-api-access-55lj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.571998 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8032b59a-f024-4eb0-93d7-d26a77889a96" (UID: "8032b59a-f024-4eb0-93d7-d26a77889a96"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.572526 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory" (OuterVolumeSpecName: "inventory") pod "8032b59a-f024-4eb0-93d7-d26a77889a96" (UID: "8032b59a-f024-4eb0-93d7-d26a77889a96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.655156 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.655297 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8032b59a-f024-4eb0-93d7-d26a77889a96-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:41 crc kubenswrapper[4642]: I0128 07:15:41.655378 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55lj4\" (UniqueName: \"kubernetes.io/projected/8032b59a-f024-4eb0-93d7-d26a77889a96-kube-api-access-55lj4\") on node \"crc\" DevicePath \"\"" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.154550 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" event={"ID":"8032b59a-f024-4eb0-93d7-d26a77889a96","Type":"ContainerDied","Data":"c63c9b7a34f1d81238f93a7c5c6491e4f1d6e6a4fe4dadebdff88a2edef92b5e"} Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.154588 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c63c9b7a34f1d81238f93a7c5c6491e4f1d6e6a4fe4dadebdff88a2edef92b5e" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.154596 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-h2225" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.209566 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp"] Jan 28 07:15:42 crc kubenswrapper[4642]: E0128 07:15:42.209943 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8032b59a-f024-4eb0-93d7-d26a77889a96" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.209963 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8032b59a-f024-4eb0-93d7-d26a77889a96" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.210153 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8032b59a-f024-4eb0-93d7-d26a77889a96" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.210754 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.213499 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.213499 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.213865 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.213906 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.220306 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp"] Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.366659 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.366704 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2t8f\" (UniqueName: \"kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.366869 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.468584 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.468637 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2t8f\" (UniqueName: \"kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.468736 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.472435 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.472525 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.481629 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2t8f\" (UniqueName: \"kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtjpp\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.524622 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:15:42 crc kubenswrapper[4642]: I0128 07:15:42.969608 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp"] Jan 28 07:15:43 crc kubenswrapper[4642]: I0128 07:15:43.162042 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" event={"ID":"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8","Type":"ContainerStarted","Data":"b80867dc8a84e78bf534b11a55ab54a115fca6b8924492557fb602a15956328f"} Jan 28 07:15:44 crc kubenswrapper[4642]: I0128 07:15:44.030871 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5n5zh"] Jan 28 07:15:44 crc kubenswrapper[4642]: I0128 07:15:44.037742 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5n5zh"] Jan 28 07:15:44 crc kubenswrapper[4642]: I0128 07:15:44.169453 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" event={"ID":"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8","Type":"ContainerStarted","Data":"f0383f64353cdacb2bb923e20c2bbdf5b48283ec7bc1c7a9249fc12dc1deb93d"} Jan 28 07:15:44 crc kubenswrapper[4642]: I0128 07:15:44.184634 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" podStartSLOduration=1.587929224 podStartE2EDuration="2.184616563s" podCreationTimestamp="2026-01-28 07:15:42 +0000 UTC" firstStartedPulling="2026-01-28 07:15:42.973746574 +0000 UTC m=+1666.205835382" lastFinishedPulling="2026-01-28 07:15:43.570433912 +0000 UTC m=+1666.802522721" observedRunningTime="2026-01-28 07:15:44.178328334 +0000 UTC m=+1667.410417143" watchObservedRunningTime="2026-01-28 07:15:44.184616563 +0000 UTC m=+1667.416705372" Jan 
Jan 28 07:15:45 crc kubenswrapper[4642]: I0128 07:15:45.106415 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40790093-08bf-4d1a-8718-dc943de05f37" path="/var/lib/kubelet/pods/40790093-08bf-4d1a-8718-dc943de05f37/volumes"
Jan 28 07:15:53 crc kubenswrapper[4642]: I0128 07:15:53.098548 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"
Jan 28 07:15:53 crc kubenswrapper[4642]: E0128 07:15:53.099177 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.115180 4642 scope.go:117] "RemoveContainer" containerID="a4b3874722745c6c64f9be87b3da67e5cde249adac1a852c1222f7d9859238bc"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.132214 4642 scope.go:117] "RemoveContainer" containerID="ba9382bf6c4faad83b68a6e1c918e372cc1171069e3cca2bcd5e0f0de77e8f0b"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.164228 4642 scope.go:117] "RemoveContainer" containerID="972383baec7d5de61350f07411785e47be5c10fa045d8f2fbd4e8e0ae71872c2"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.190899 4642 scope.go:117] "RemoveContainer" containerID="79d6f97bcca0dcf2afa4c59c89778fc41fc99a3d080d6fd6b0b2080212833164"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.219118 4642 scope.go:117] "RemoveContainer" containerID="1f390e5ba9dc59d1d4b1ef1707577455571a9a0b34a3ce48db1b3641fc7e6743"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.262353 4642 scope.go:117] "RemoveContainer" containerID="b4ba954ce240c66dd88cc9ad803dcec524d4e9f9746504ed0d251c167a523594"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.280356 4642 scope.go:117] "RemoveContainer" containerID="ff06a6b5dbba6b8efc2678a3ffe95bc8f74daab7b5106b010fbd362c1fa375af"
Jan 28 07:16:03 crc kubenswrapper[4642]: I0128 07:16:03.295386 4642 scope.go:117] "RemoveContainer" containerID="bc6af9f4d68d97ec9d5d0022d3c1e57a8f03a83462648eaa37c994d8784716b6"
Jan 28 07:16:04 crc kubenswrapper[4642]: I0128 07:16:04.098591 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"
Jan 28 07:16:04 crc kubenswrapper[4642]: E0128 07:16:04.098931 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:16:08 crc kubenswrapper[4642]: I0128 07:16:08.034256 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tb67c"]
Jan 28 07:16:08 crc kubenswrapper[4642]: I0128 07:16:08.042057 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tb67c"]
Jan 28 07:16:09 crc kubenswrapper[4642]: I0128 07:16:09.024116 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qch6m"]
Jan 28 07:16:09 crc kubenswrapper[4642]: I0128 07:16:09.029142 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qch6m"]
Jan 28 07:16:09 crc kubenswrapper[4642]: I0128 07:16:09.106059 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2388a8f5-7be5-4284-9370-23d6f1545c8a" path="/var/lib/kubelet/pods/2388a8f5-7be5-4284-9370-23d6f1545c8a/volumes"
Jan 28 07:16:09 crc kubenswrapper[4642]: I0128 07:16:09.106623 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c94226-5b5d-4c10-a0fb-fcf5e42e34c0" path="/var/lib/kubelet/pods/84c94226-5b5d-4c10-a0fb-fcf5e42e34c0/volumes"
Jan 28 07:16:10 crc kubenswrapper[4642]: I0128 07:16:10.357998 4642 generic.go:334] "Generic (PLEG): container finished" podID="8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" containerID="f0383f64353cdacb2bb923e20c2bbdf5b48283ec7bc1c7a9249fc12dc1deb93d" exitCode=0
Jan 28 07:16:10 crc kubenswrapper[4642]: I0128 07:16:10.358074 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" event={"ID":"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8","Type":"ContainerDied","Data":"f0383f64353cdacb2bb923e20c2bbdf5b48283ec7bc1c7a9249fc12dc1deb93d"}
Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.658093 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp"
Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.730976 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory\") pod \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") "
Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.731121 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2t8f\" (UniqueName: \"kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f\") pod \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") "
Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.731246 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-ssh-key-openstack-edpm-ipam\") pod \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\" (UID: \"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8\") "
Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.734907 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f" (OuterVolumeSpecName: "kube-api-access-p2t8f") pod "8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" (UID: "8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8"). InnerVolumeSpecName "kube-api-access-p2t8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.751629 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory" (OuterVolumeSpecName: "inventory") pod "8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" (UID: "8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.833868 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2t8f\" (UniqueName: \"kubernetes.io/projected/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-kube-api-access-p2t8f\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.833895 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:11 crc kubenswrapper[4642]: I0128 07:16:11.833905 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.372620 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" event={"ID":"8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8","Type":"ContainerDied","Data":"b80867dc8a84e78bf534b11a55ab54a115fca6b8924492557fb602a15956328f"} Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.372659 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80867dc8a84e78bf534b11a55ab54a115fca6b8924492557fb602a15956328f" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.372666 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtjpp" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.430989 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv"] Jan 28 07:16:12 crc kubenswrapper[4642]: E0128 07:16:12.431570 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.431591 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.431802 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.432398 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.434383 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.434458 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.434732 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.434877 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.438687 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv"] Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.442080 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.442247 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.442483 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.544053 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.544207 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.544248 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.547361 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.547372 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.557742 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k65rv\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:12 crc kubenswrapper[4642]: I0128 07:16:12.753081 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:13 crc kubenswrapper[4642]: I0128 07:16:13.167079 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv"] Jan 28 07:16:13 crc kubenswrapper[4642]: I0128 07:16:13.380428 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" event={"ID":"e8972141-a9ad-40b1-abb7-5e0fbdf8feda","Type":"ContainerStarted","Data":"cc32aa12ef0fd21d55189d3ad9c0f6aefb174615b9a9631b4968e673b6c7fe3c"} Jan 28 07:16:14 crc kubenswrapper[4642]: I0128 07:16:14.393919 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" event={"ID":"e8972141-a9ad-40b1-abb7-5e0fbdf8feda","Type":"ContainerStarted","Data":"b006225d37915ceb905edc95608ebbe75db7f62fe2bcdc6e7f67dcd45b43e360"} Jan 28 07:16:14 crc kubenswrapper[4642]: I0128 07:16:14.410893 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" podStartSLOduration=1.9088491749999998 podStartE2EDuration="2.410877213s" podCreationTimestamp="2026-01-28 07:16:12 +0000 UTC" firstStartedPulling="2026-01-28 07:16:13.171152391 +0000 UTC m=+1696.403241201" lastFinishedPulling="2026-01-28 07:16:13.67318043 +0000 UTC m=+1696.905269239" observedRunningTime="2026-01-28 07:16:14.406245838 +0000 UTC m=+1697.638334647" watchObservedRunningTime="2026-01-28 07:16:14.410877213 +0000 UTC m=+1697.642966021" Jan 28 07:16:19 crc kubenswrapper[4642]: I0128 07:16:19.098543 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:16:19 crc 
Jan 28 07:16:19 crc kubenswrapper[4642]: E0128 07:16:19.099084 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:16:30 crc kubenswrapper[4642]: I0128 07:16:30.098531 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"
Jan 28 07:16:30 crc kubenswrapper[4642]: E0128 07:16:30.099119 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:16:40 crc kubenswrapper[4642]: I0128 07:16:40.405605 4642 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f4ff4554c-l99rf" podUID="31037f93-2b83-4bd0-bcdf-62c0a973432a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 28 07:16:41 crc kubenswrapper[4642]: I0128 07:16:41.099063 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303"
Jan 28 07:16:41 crc kubenswrapper[4642]: E0128 07:16:41.099579 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:16:47 crc kubenswrapper[4642]: I0128 07:16:47.598021 4642 generic.go:334] "Generic (PLEG): container finished" podID="e8972141-a9ad-40b1-abb7-5e0fbdf8feda" containerID="b006225d37915ceb905edc95608ebbe75db7f62fe2bcdc6e7f67dcd45b43e360" exitCode=0
Jan 28 07:16:47 crc kubenswrapper[4642]: I0128 07:16:47.598087 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" event={"ID":"e8972141-a9ad-40b1-abb7-5e0fbdf8feda","Type":"ContainerDied","Data":"b006225d37915ceb905edc95608ebbe75db7f62fe2bcdc6e7f67dcd45b43e360"}
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.933842 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn\") pod \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.933898 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory\") pod \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.934115 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam\") pod \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\" (UID: \"e8972141-a9ad-40b1-abb7-5e0fbdf8feda\") " Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.939466 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn" (OuterVolumeSpecName: "kube-api-access-wdkkn") pod "e8972141-a9ad-40b1-abb7-5e0fbdf8feda" (UID: "e8972141-a9ad-40b1-abb7-5e0fbdf8feda"). InnerVolumeSpecName "kube-api-access-wdkkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.954831 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory" (OuterVolumeSpecName: "inventory") pod "e8972141-a9ad-40b1-abb7-5e0fbdf8feda" (UID: "e8972141-a9ad-40b1-abb7-5e0fbdf8feda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:48 crc kubenswrapper[4642]: I0128 07:16:48.959003 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8972141-a9ad-40b1-abb7-5e0fbdf8feda" (UID: "e8972141-a9ad-40b1-abb7-5e0fbdf8feda"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.036149 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.036176 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkkn\" (UniqueName: \"kubernetes.io/projected/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-kube-api-access-wdkkn\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.036206 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8972141-a9ad-40b1-abb7-5e0fbdf8feda-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.612463 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" event={"ID":"e8972141-a9ad-40b1-abb7-5e0fbdf8feda","Type":"ContainerDied","Data":"cc32aa12ef0fd21d55189d3ad9c0f6aefb174615b9a9631b4968e673b6c7fe3c"} Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.612498 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k65rv" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.612501 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc32aa12ef0fd21d55189d3ad9c0f6aefb174615b9a9631b4968e673b6c7fe3c" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.679552 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9sx26"] Jan 28 07:16:49 crc kubenswrapper[4642]: E0128 07:16:49.679873 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8972141-a9ad-40b1-abb7-5e0fbdf8feda" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.679890 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8972141-a9ad-40b1-abb7-5e0fbdf8feda" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.680059 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8972141-a9ad-40b1-abb7-5e0fbdf8feda" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.680576 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.682087 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.682252 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.682343 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.682426 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.696772 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9sx26"] Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.747015 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdbd\" (UniqueName: \"kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.747079 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.747224 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.848020 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdbd\" (UniqueName: \"kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.848066 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.848136 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:49 crc 
Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.852029 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26"
Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.852144 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26"
Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.861483 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdbd\" (UniqueName: \"kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd\") pod \"ssh-known-hosts-edpm-deployment-9sx26\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " pod="openstack/ssh-known-hosts-edpm-deployment-9sx26"
Jan 28 07:16:49 crc kubenswrapper[4642]: I0128 07:16:49.999574 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26"
Jan 28 07:16:50 crc kubenswrapper[4642]: I0128 07:16:50.420219 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9sx26"]
Jan 28 07:16:50 crc kubenswrapper[4642]: I0128 07:16:50.619908 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" event={"ID":"00779b3c-2623-48a8-88e3-72355cdcf9f9","Type":"ContainerStarted","Data":"e12d716ef4e17dfbda30adfdf353132865a8cab6d2db3926dd2269ecf206e8fe"}
Jan 28 07:16:51 crc kubenswrapper[4642]: I0128 07:16:51.628160 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" event={"ID":"00779b3c-2623-48a8-88e3-72355cdcf9f9","Type":"ContainerStarted","Data":"bbc900ac513954e2d4dc358ddf1c221859ef364a3a04c0d19d53d096ee692b83"}
Jan 28 07:16:51 crc kubenswrapper[4642]: I0128 07:16:51.643707 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" podStartSLOduration=2.117906702 podStartE2EDuration="2.643692474s" podCreationTimestamp="2026-01-28 07:16:49 +0000 UTC" firstStartedPulling="2026-01-28 07:16:50.424065477 +0000 UTC m=+1733.656154285" lastFinishedPulling="2026-01-28 07:16:50.949851237 +0000 UTC m=+1734.181940057" observedRunningTime="2026-01-28 07:16:51.63731081 +0000 UTC m=+1734.869399619" watchObservedRunningTime="2026-01-28 07:16:51.643692474 +0000 UTC m=+1734.875781284"
Jan 28 07:16:52 crc kubenswrapper[4642]: I0128 07:16:52.029893 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qkm72"]
Jan 28 07:16:52 crc kubenswrapper[4642]: I0128 07:16:52.036143 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qkm72"]
Jan 28 07:16:53 crc kubenswrapper[4642]: I0128 07:16:53.106203 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc69b5fc-a582-4c57-972a-fa5a1e0f0203" path="/var/lib/kubelet/pods/dc69b5fc-a582-4c57-972a-fa5a1e0f0203/volumes"
containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:16:56 crc kubenswrapper[4642]: E0128 07:16:56.098614 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:16:56 crc kubenswrapper[4642]: I0128 07:16:56.658627 4642 generic.go:334] "Generic (PLEG): container finished" podID="00779b3c-2623-48a8-88e3-72355cdcf9f9" containerID="bbc900ac513954e2d4dc358ddf1c221859ef364a3a04c0d19d53d096ee692b83" exitCode=0 Jan 28 07:16:56 crc kubenswrapper[4642]: I0128 07:16:56.658709 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" event={"ID":"00779b3c-2623-48a8-88e3-72355cdcf9f9","Type":"ContainerDied","Data":"bbc900ac513954e2d4dc358ddf1c221859ef364a3a04c0d19d53d096ee692b83"} Jan 28 07:16:57 crc kubenswrapper[4642]: I0128 07:16:57.969481 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.075333 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam\") pod \"00779b3c-2623-48a8-88e3-72355cdcf9f9\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.075446 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0\") pod \"00779b3c-2623-48a8-88e3-72355cdcf9f9\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.075680 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdbd\" (UniqueName: \"kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd\") pod \"00779b3c-2623-48a8-88e3-72355cdcf9f9\" (UID: \"00779b3c-2623-48a8-88e3-72355cdcf9f9\") " Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.080040 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd" (OuterVolumeSpecName: "kube-api-access-qvdbd") pod "00779b3c-2623-48a8-88e3-72355cdcf9f9" (UID: "00779b3c-2623-48a8-88e3-72355cdcf9f9"). InnerVolumeSpecName "kube-api-access-qvdbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.096282 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "00779b3c-2623-48a8-88e3-72355cdcf9f9" (UID: "00779b3c-2623-48a8-88e3-72355cdcf9f9"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.096501 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00779b3c-2623-48a8-88e3-72355cdcf9f9" (UID: "00779b3c-2623-48a8-88e3-72355cdcf9f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.177987 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.178011 4642 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00779b3c-2623-48a8-88e3-72355cdcf9f9-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.178020 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdbd\" (UniqueName: \"kubernetes.io/projected/00779b3c-2623-48a8-88e3-72355cdcf9f9-kube-api-access-qvdbd\") on node \"crc\" DevicePath \"\"" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.672638 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" event={"ID":"00779b3c-2623-48a8-88e3-72355cdcf9f9","Type":"ContainerDied","Data":"e12d716ef4e17dfbda30adfdf353132865a8cab6d2db3926dd2269ecf206e8fe"} Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.672832 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12d716ef4e17dfbda30adfdf353132865a8cab6d2db3926dd2269ecf206e8fe" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.672688 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9sx26" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.718156 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6"] Jan 28 07:16:58 crc kubenswrapper[4642]: E0128 07:16:58.718762 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00779b3c-2623-48a8-88e3-72355cdcf9f9" containerName="ssh-known-hosts-edpm-deployment" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.718780 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="00779b3c-2623-48a8-88e3-72355cdcf9f9" containerName="ssh-known-hosts-edpm-deployment" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.718962 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="00779b3c-2623-48a8-88e3-72355cdcf9f9" containerName="ssh-known-hosts-edpm-deployment" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.719534 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.720770 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.720878 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.721119 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.721783 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.728110 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6"] Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.786752 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g4g\" (UniqueName: \"kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.786830 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.787025 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.888588 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.888693 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g4g\" (UniqueName: \"kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.888738 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.891700 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.891922 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:58 crc kubenswrapper[4642]: I0128 07:16:58.902598 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g4g\" (UniqueName: \"kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4lgl6\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:59 crc kubenswrapper[4642]: I0128 07:16:59.035972 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:16:59 crc kubenswrapper[4642]: I0128 07:16:59.477130 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6"] Jan 28 07:16:59 crc kubenswrapper[4642]: I0128 07:16:59.680576 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" event={"ID":"826495f0-3162-41a2-bbf2-f95814348f47","Type":"ContainerStarted","Data":"d2dd0eaaf6b69f8fd6fb6a6c278903bc624d1e2abd9f0735d815672bc404cab6"} Jan 28 07:17:00 crc kubenswrapper[4642]: I0128 07:17:00.689878 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" event={"ID":"826495f0-3162-41a2-bbf2-f95814348f47","Type":"ContainerStarted","Data":"42841eaf5f345beccc7fbd677f2fe882badef5490d03989ef188ee8a4888aa8a"} Jan 28 07:17:00 crc kubenswrapper[4642]: I0128 07:17:00.702756 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" podStartSLOduration=2.246298042 podStartE2EDuration="2.702740674s" podCreationTimestamp="2026-01-28 07:16:58 +0000 UTC" firstStartedPulling="2026-01-28 07:16:59.48191346 +0000 UTC m=+1742.714002269" lastFinishedPulling="2026-01-28 07:16:59.938356092 +0000 UTC m=+1743.170444901" observedRunningTime="2026-01-28 07:17:00.699873104 +0000 UTC m=+1743.931961913" watchObservedRunningTime="2026-01-28 07:17:00.702740674 +0000 UTC m=+1743.934829483" Jan 28 07:17:03 crc kubenswrapper[4642]: I0128 07:17:03.413611 4642 scope.go:117] "RemoveContainer" containerID="5f0e66c6206c665d6b35e28f1980eb6c6b8888516ed8ccf1196fdd05c874ac2c" Jan 28 07:17:03 crc kubenswrapper[4642]: I0128 07:17:03.449296 4642 scope.go:117] "RemoveContainer" containerID="5645b4498308085d82bbbfaac6f38f39fd0b913a902a9cd0bb0c304c0d6e6a10" Jan 28 07:17:03 crc 
Jan 28 07:17:03 crc kubenswrapper[4642]: I0128 07:17:03.478649 4642 scope.go:117] "RemoveContainer" containerID="1110d8f2c269c3abed7283437cb94bcd9cc7197b6f63d13c65ce695d81875105"
Jan 28 07:17:05 crc kubenswrapper[4642]: I0128 07:17:05.722652 4642 generic.go:334] "Generic (PLEG): container finished" podID="826495f0-3162-41a2-bbf2-f95814348f47" containerID="42841eaf5f345beccc7fbd677f2fe882badef5490d03989ef188ee8a4888aa8a" exitCode=0
Jan 28 07:17:05 crc kubenswrapper[4642]: I0128 07:17:05.722725 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" event={"ID":"826495f0-3162-41a2-bbf2-f95814348f47","Type":"ContainerDied","Data":"42841eaf5f345beccc7fbd677f2fe882badef5490d03989ef188ee8a4888aa8a"}
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.015808 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6"
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.036930 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory\") pod \"826495f0-3162-41a2-bbf2-f95814348f47\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") "
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.036979 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam\") pod \"826495f0-3162-41a2-bbf2-f95814348f47\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") "
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.037161 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8g4g\" (UniqueName: \"kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g\") pod \"826495f0-3162-41a2-bbf2-f95814348f47\" (UID: \"826495f0-3162-41a2-bbf2-f95814348f47\") "
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.041346 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g" (OuterVolumeSpecName: "kube-api-access-k8g4g") pod "826495f0-3162-41a2-bbf2-f95814348f47" (UID: "826495f0-3162-41a2-bbf2-f95814348f47"). InnerVolumeSpecName "kube-api-access-k8g4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.057306 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "826495f0-3162-41a2-bbf2-f95814348f47" (UID: "826495f0-3162-41a2-bbf2-f95814348f47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.057651 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory" (OuterVolumeSpecName: "inventory") pod "826495f0-3162-41a2-bbf2-f95814348f47" (UID: "826495f0-3162-41a2-bbf2-f95814348f47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.139640 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.139759 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/826495f0-3162-41a2-bbf2-f95814348f47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.139787 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8g4g\" (UniqueName: \"kubernetes.io/projected/826495f0-3162-41a2-bbf2-f95814348f47-kube-api-access-k8g4g\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.737038 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" event={"ID":"826495f0-3162-41a2-bbf2-f95814348f47","Type":"ContainerDied","Data":"d2dd0eaaf6b69f8fd6fb6a6c278903bc624d1e2abd9f0735d815672bc404cab6"} Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.737078 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dd0eaaf6b69f8fd6fb6a6c278903bc624d1e2abd9f0735d815672bc404cab6" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.737088 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4lgl6" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.797112 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s"] Jan 28 07:17:07 crc kubenswrapper[4642]: E0128 07:17:07.797591 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826495f0-3162-41a2-bbf2-f95814348f47" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.797606 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="826495f0-3162-41a2-bbf2-f95814348f47" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.797825 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="826495f0-3162-41a2-bbf2-f95814348f47" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.798394 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.800168 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.800430 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.800516 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.801022 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.804341 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s"] Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.848640 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh65r\" (UniqueName: \"kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.848719 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.848875 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.950759 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.950892 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh65r\" (UniqueName: \"kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.950944 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.954453 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.955225 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:07 crc kubenswrapper[4642]: I0128 07:17:07.964397 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh65r\" (UniqueName: \"kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:08 crc kubenswrapper[4642]: I0128 07:17:08.114500 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:08 crc kubenswrapper[4642]: I0128 07:17:08.523088 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s"] Jan 28 07:17:08 crc kubenswrapper[4642]: I0128 07:17:08.744721 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" event={"ID":"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b","Type":"ContainerStarted","Data":"c89e3e64db4bda31ac5d107636cfdacc2d7c07b5271cae75548d5e57bbac9d16"} Jan 28 07:17:09 crc kubenswrapper[4642]: I0128 07:17:09.751742 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" event={"ID":"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b","Type":"ContainerStarted","Data":"8a645ddcf53225dfb5d8a86cb6829659780b230b0a34208b266147a772ea9785"} Jan 28 07:17:09 crc kubenswrapper[4642]: I0128 07:17:09.769070 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" podStartSLOduration=2.26419688 podStartE2EDuration="2.769056181s" podCreationTimestamp="2026-01-28 07:17:07 +0000 UTC" firstStartedPulling="2026-01-28 07:17:08.530062504 +0000 UTC m=+1751.762151313" lastFinishedPulling="2026-01-28 07:17:09.034921804 +0000 UTC m=+1752.267010614" observedRunningTime="2026-01-28 07:17:09.76263888 +0000 UTC m=+1752.994727689" watchObservedRunningTime="2026-01-28 07:17:09.769056181 +0000 UTC m=+1753.001144990" Jan 28 07:17:11 crc kubenswrapper[4642]: I0128 07:17:11.098199 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:17:11 crc kubenswrapper[4642]: E0128 07:17:11.098594 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:17:16 crc kubenswrapper[4642]: I0128 07:17:16.806715 4642 generic.go:334] "Generic (PLEG): container finished" podID="2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" containerID="8a645ddcf53225dfb5d8a86cb6829659780b230b0a34208b266147a772ea9785" exitCode=0 Jan 28 07:17:16 crc kubenswrapper[4642]: I0128 07:17:16.806796 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" event={"ID":"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b","Type":"ContainerDied","Data":"8a645ddcf53225dfb5d8a86cb6829659780b230b0a34208b266147a772ea9785"} Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.107798 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.306990 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam\") pod \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.307081 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory\") pod \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.307117 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh65r\" (UniqueName: \"kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r\") pod \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\" (UID: \"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b\") " Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.311793 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r" (OuterVolumeSpecName: "kube-api-access-vh65r") pod "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" (UID: "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b"). InnerVolumeSpecName "kube-api-access-vh65r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.327656 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory" (OuterVolumeSpecName: "inventory") pod "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" (UID: "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.328713 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" (UID: "2e6a86f6-9d82-44b5-8f3d-cd0d0520462b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.409631 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.409652 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.409661 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh65r\" (UniqueName: \"kubernetes.io/projected/2e6a86f6-9d82-44b5-8f3d-cd0d0520462b-kube-api-access-vh65r\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.819124 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" event={"ID":"2e6a86f6-9d82-44b5-8f3d-cd0d0520462b","Type":"ContainerDied","Data":"c89e3e64db4bda31ac5d107636cfdacc2d7c07b5271cae75548d5e57bbac9d16"} Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.819367 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89e3e64db4bda31ac5d107636cfdacc2d7c07b5271cae75548d5e57bbac9d16" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.819151 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.873275 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"] Jan 28 07:17:18 crc kubenswrapper[4642]: E0128 07:17:18.873572 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.873589 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.873770 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6a86f6-9d82-44b5-8f3d-cd0d0520462b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.874277 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.875953 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.876245 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.876381 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.878515 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.878515 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.878554 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.878557 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.880385 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 07:17:18 crc kubenswrapper[4642]: I0128 07:17:18.883227 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"] Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.018245 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.018317 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.018340 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019108 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019151 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019215 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019279 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019440 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019467 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcb4l\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019495 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019585 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: 
\"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019619 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019653 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.019701 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.121448 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.121901 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcb4l\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122042 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122678 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc 
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122713 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122753 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122797 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122857 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122902 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122921 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.122950 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"
\"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.123003 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.123050 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.124698 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.125482 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.125693 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.126080 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.126673 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: 
\"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.126729 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.127162 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.127317 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.127758 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.127835 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.128155 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.128270 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.129356 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.134634 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcb4l\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.187148 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.583683 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr"] Jan 28 07:17:19 crc kubenswrapper[4642]: W0128 07:17:19.586681 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda86d82be_6640_4441_a938_230f7beded20.slice/crio-ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef WatchSource:0}: Error finding container ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef: Status 404 returned error can't find the container with id ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef Jan 28 07:17:19 crc kubenswrapper[4642]: I0128 07:17:19.826067 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" event={"ID":"a86d82be-6640-4441-a938-230f7beded20","Type":"ContainerStarted","Data":"ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef"} Jan 28 07:17:20 crc kubenswrapper[4642]: I0128 07:17:20.832463 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" event={"ID":"a86d82be-6640-4441-a938-230f7beded20","Type":"ContainerStarted","Data":"842578b62ec50a66c3a348eb07effcaa7c702b4fe4a7aa4b99df989673982185"} Jan 28 07:17:20 crc kubenswrapper[4642]: I0128 07:17:20.848900 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" podStartSLOduration=2.3552610720000002 podStartE2EDuration="2.848888121s" podCreationTimestamp="2026-01-28 07:17:18 +0000 UTC" firstStartedPulling="2026-01-28 07:17:19.588418861 +0000 UTC m=+1762.820507671" lastFinishedPulling="2026-01-28 07:17:20.082045921 +0000 UTC m=+1763.314134720" observedRunningTime="2026-01-28 07:17:20.846958906 +0000 UTC m=+1764.079047715" watchObservedRunningTime="2026-01-28 07:17:20.848888121 +0000 UTC m=+1764.080976931" Jan 28 07:17:23 crc kubenswrapper[4642]: I0128 07:17:23.098736 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:17:23 crc kubenswrapper[4642]: E0128 07:17:23.099123 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:17:36 crc kubenswrapper[4642]: I0128 07:17:36.098495 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:17:36 crc kubenswrapper[4642]: E0128 07:17:36.099304 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:17:45 crc kubenswrapper[4642]: I0128 07:17:45.986858 4642 generic.go:334] "Generic (PLEG): container finished" podID="a86d82be-6640-4441-a938-230f7beded20" containerID="842578b62ec50a66c3a348eb07effcaa7c702b4fe4a7aa4b99df989673982185" exitCode=0 Jan 28 07:17:45 crc kubenswrapper[4642]: I0128 07:17:45.986948 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" event={"ID":"a86d82be-6640-4441-a938-230f7beded20","Type":"ContainerDied","Data":"842578b62ec50a66c3a348eb07effcaa7c702b4fe4a7aa4b99df989673982185"} Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.286596 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364029 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364064 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcb4l\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364084 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364116 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364144 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") " Jan 28 07:17:47 crc 
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364165 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364209 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364786 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364824 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364874 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364905 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364943 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364963 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.364985 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle\") pod \"a86d82be-6640-4441-a938-230f7beded20\" (UID: \"a86d82be-6640-4441-a938-230f7beded20\") "
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.369601 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.369764 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l" (OuterVolumeSpecName: "kube-api-access-vcb4l") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "kube-api-access-vcb4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.369782 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370333 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370346 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370592 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370690 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370763 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370875 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.370998 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.371915 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.373646 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.386104 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory" (OuterVolumeSpecName: "inventory") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.386322 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a86d82be-6640-4441-a938-230f7beded20" (UID: "a86d82be-6640-4441-a938-230f7beded20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467275 4642 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467303 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467313 4642 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467322 4642 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467331 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467338 4642 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467346 4642 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467356 4642 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467365 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467386 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcb4l\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-kube-api-access-vcb4l\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467393 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467401 4642 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467408 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a86d82be-6640-4441-a938-230f7beded20-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:47 crc kubenswrapper[4642]: I0128 07:17:47.467416 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a86d82be-6640-4441-a938-230f7beded20-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.000305 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" event={"ID":"a86d82be-6640-4441-a938-230f7beded20","Type":"ContainerDied","Data":"ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef"} Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.000338 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce06be0489939daffd08ef107a94ebc802ff07fdbb133e6cf1edb55eaee4ebef" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.000346 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.077803 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4"] Jan 28 07:17:48 crc kubenswrapper[4642]: E0128 07:17:48.078128 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86d82be-6640-4441-a938-230f7beded20" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.078144 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86d82be-6640-4441-a938-230f7beded20" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.078296 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86d82be-6640-4441-a938-230f7beded20" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.078839 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.080298 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.080568 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.080741 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.080905 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.080966 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.083740 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4"] Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.098250 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.178229 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.178315 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.178398 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2ss\" (UniqueName: \"kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.178432 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.178497 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.280587 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.280938 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.280984 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.281017 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2ss\" (UniqueName: \"kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.281034 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.282072 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.284306 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.284717 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 
07:17:48.284800 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.293813 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2ss\" (UniqueName: \"kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h6cx4\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.398173 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:17:48 crc kubenswrapper[4642]: I0128 07:17:48.829047 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4"] Jan 28 07:17:49 crc kubenswrapper[4642]: I0128 07:17:49.008580 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79"} Jan 28 07:17:49 crc kubenswrapper[4642]: I0128 07:17:49.009568 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" event={"ID":"2de02231-7ff5-4fea-8660-09a3a907adbe","Type":"ContainerStarted","Data":"96398fda7ada4590f5c2335457ad5e1115129b116256d3ffd17585c7ffdb1668"} Jan 28 07:17:50 crc kubenswrapper[4642]: I0128 07:17:50.016335 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" event={"ID":"2de02231-7ff5-4fea-8660-09a3a907adbe","Type":"ContainerStarted","Data":"3c3601a09a17d517509dfd39381d5b8103f2c4563e2bfef1552aeb276ff93929"} Jan 28 07:17:50 crc kubenswrapper[4642]: I0128 07:17:50.029620 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" podStartSLOduration=1.530854403 podStartE2EDuration="2.029606956s" podCreationTimestamp="2026-01-28 07:17:48 +0000 UTC" firstStartedPulling="2026-01-28 07:17:48.833799328 +0000 UTC m=+1792.065888137" lastFinishedPulling="2026-01-28 07:17:49.33255188 +0000 UTC m=+1792.564640690" observedRunningTime="2026-01-28 07:17:50.025997594 +0000 UTC m=+1793.258086403" watchObservedRunningTime="2026-01-28 07:17:50.029606956 +0000 UTC m=+1793.261695766" Jan 28 07:18:34 crc kubenswrapper[4642]: I0128 07:18:34.285015 4642 generic.go:334] "Generic (PLEG): container finished" podID="2de02231-7ff5-4fea-8660-09a3a907adbe" containerID="3c3601a09a17d517509dfd39381d5b8103f2c4563e2bfef1552aeb276ff93929" exitCode=0 Jan 28 07:18:34 crc kubenswrapper[4642]: I0128 07:18:34.285160 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" event={"ID":"2de02231-7ff5-4fea-8660-09a3a907adbe","Type":"ContainerDied","Data":"3c3601a09a17d517509dfd39381d5b8103f2c4563e2bfef1552aeb276ff93929"} Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.592134 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.733886 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2ss\" (UniqueName: \"kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss\") pod \"2de02231-7ff5-4fea-8660-09a3a907adbe\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.734028 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory\") pod \"2de02231-7ff5-4fea-8660-09a3a907adbe\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.734097 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam\") pod \"2de02231-7ff5-4fea-8660-09a3a907adbe\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.734129 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle\") pod \"2de02231-7ff5-4fea-8660-09a3a907adbe\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.734164 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0\") pod \"2de02231-7ff5-4fea-8660-09a3a907adbe\" (UID: \"2de02231-7ff5-4fea-8660-09a3a907adbe\") " Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.738370 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss" (OuterVolumeSpecName: "kube-api-access-gq2ss") pod "2de02231-7ff5-4fea-8660-09a3a907adbe" (UID: "2de02231-7ff5-4fea-8660-09a3a907adbe"). InnerVolumeSpecName "kube-api-access-gq2ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.738485 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2de02231-7ff5-4fea-8660-09a3a907adbe" (UID: "2de02231-7ff5-4fea-8660-09a3a907adbe"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.752657 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2de02231-7ff5-4fea-8660-09a3a907adbe" (UID: "2de02231-7ff5-4fea-8660-09a3a907adbe"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.753575 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory" (OuterVolumeSpecName: "inventory") pod "2de02231-7ff5-4fea-8660-09a3a907adbe" (UID: "2de02231-7ff5-4fea-8660-09a3a907adbe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.753865 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2de02231-7ff5-4fea-8660-09a3a907adbe" (UID: "2de02231-7ff5-4fea-8660-09a3a907adbe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.836392 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq2ss\" (UniqueName: \"kubernetes.io/projected/2de02231-7ff5-4fea-8660-09a3a907adbe-kube-api-access-gq2ss\") on node \"crc\" DevicePath \"\"" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.836416 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.836426 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.836451 4642 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de02231-7ff5-4fea-8660-09a3a907adbe-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:18:35 crc kubenswrapper[4642]: I0128 07:18:35.836461 4642 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2de02231-7ff5-4fea-8660-09a3a907adbe-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.301869 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" event={"ID":"2de02231-7ff5-4fea-8660-09a3a907adbe","Type":"ContainerDied","Data":"96398fda7ada4590f5c2335457ad5e1115129b116256d3ffd17585c7ffdb1668"} Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.302081 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96398fda7ada4590f5c2335457ad5e1115129b116256d3ffd17585c7ffdb1668" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.301903 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h6cx4" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.360851 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d"] Jan 28 07:18:36 crc kubenswrapper[4642]: E0128 07:18:36.361144 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de02231-7ff5-4fea-8660-09a3a907adbe" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.361160 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de02231-7ff5-4fea-8660-09a3a907adbe" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.361378 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de02231-7ff5-4fea-8660-09a3a907adbe" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.361876 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.363668 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.363933 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.364740 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.365239 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.366983 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.367020 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.372099 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d"] Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444219 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444284 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfbs\" (UniqueName: \"kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444306 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444389 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444410 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.444437 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.545736 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.545792 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfbs\" (UniqueName: \"kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.545815 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.545883 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.546021 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.546486 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.550229 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.550375 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.551090 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.551230 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.552124 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 
07:18:36.558028 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfbs\" (UniqueName: \"kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:36 crc kubenswrapper[4642]: I0128 07:18:36.675640 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:18:37 crc kubenswrapper[4642]: I0128 07:18:37.096086 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d"] Jan 28 07:18:37 crc kubenswrapper[4642]: I0128 07:18:37.100496 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:18:37 crc kubenswrapper[4642]: I0128 07:18:37.308550 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" event={"ID":"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9","Type":"ContainerStarted","Data":"f7faaf337873c931ba491b7b04fd5ab051d3b7b08d62edb1914bbb92ec87778e"} Jan 28 07:18:38 crc kubenswrapper[4642]: I0128 07:18:38.315064 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" event={"ID":"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9","Type":"ContainerStarted","Data":"b1dd145b60f2c7b5df8733c22402a95fc9e167d6d9e2a444d9ecffac042a4ee4"} Jan 28 07:18:38 crc kubenswrapper[4642]: I0128 07:18:38.335212 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" podStartSLOduration=1.839801555 podStartE2EDuration="2.335198977s" podCreationTimestamp="2026-01-28 07:18:36 +0000 UTC" firstStartedPulling="2026-01-28 07:18:37.100274354 +0000 UTC m=+1840.332363163" lastFinishedPulling="2026-01-28 07:18:37.595671775 +0000 UTC m=+1840.827760585" observedRunningTime="2026-01-28 07:18:38.328436469 +0000 UTC m=+1841.560525279" watchObservedRunningTime="2026-01-28 07:18:38.335198977 +0000 UTC m=+1841.567287785" Jan 28 07:19:11 crc kubenswrapper[4642]: I0128 07:19:11.500942 4642 generic.go:334] "Generic (PLEG): container finished" podID="29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" containerID="b1dd145b60f2c7b5df8733c22402a95fc9e167d6d9e2a444d9ecffac042a4ee4" exitCode=0 Jan 28 07:19:11 crc kubenswrapper[4642]: I0128 07:19:11.501009 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" event={"ID":"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9","Type":"ContainerDied","Data":"b1dd145b60f2c7b5df8733c22402a95fc9e167d6d9e2a444d9ecffac042a4ee4"} Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.790146 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.869151 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.869807 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.869850 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.869928 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.869960 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wfbs\" (UniqueName: \"kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.870034 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam\") pod \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\" (UID: \"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9\") " Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.874443 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.875165 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs" (OuterVolumeSpecName: "kube-api-access-8wfbs") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "kube-api-access-8wfbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.891077 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.891101 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.891684 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.892051 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory" (OuterVolumeSpecName: "inventory") pod "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" (UID: "29a8a74b-b7e6-4315-93a4-cde0bdc10ae9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972864 4642 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972890 4642 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972900 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972910 4642 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972920 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wfbs\" (UniqueName: \"kubernetes.io/projected/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-kube-api-access-8wfbs\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:12 crc kubenswrapper[4642]: I0128 07:19:12.972927 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29a8a74b-b7e6-4315-93a4-cde0bdc10ae9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.515116 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" event={"ID":"29a8a74b-b7e6-4315-93a4-cde0bdc10ae9","Type":"ContainerDied","Data":"f7faaf337873c931ba491b7b04fd5ab051d3b7b08d62edb1914bbb92ec87778e"} Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.515154 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7faaf337873c931ba491b7b04fd5ab051d3b7b08d62edb1914bbb92ec87778e" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.515221 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.571276 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2"] Jan 28 07:19:13 crc kubenswrapper[4642]: E0128 07:19:13.571599 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.571619 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.571784 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a8a74b-b7e6-4315-93a4-cde0bdc10ae9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.572307 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.573489 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.575756 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.575809 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.575909 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.575955 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579610 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf8tp\" (UniqueName: \"kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579693 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579744 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579781 4642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579795 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2"] Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.579908 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.681833 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.681987 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.682070 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.682162 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.682269 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf8tp\" (UniqueName: \"kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.685666 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: 
\"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.685728 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.690251 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.705018 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf8tp\" (UniqueName: \"kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.705387 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5snh2\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:13 crc kubenswrapper[4642]: I0128 07:19:13.889245 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:19:14 crc kubenswrapper[4642]: I0128 07:19:14.298423 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2"] Jan 28 07:19:14 crc kubenswrapper[4642]: I0128 07:19:14.522743 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" event={"ID":"4fe048e8-d571-4c2a-a306-3b1e9fdc1798","Type":"ContainerStarted","Data":"d2ecb2e24b173065f1ada0b4ca839299d85c83197d91436454aeee1878314e0b"} Jan 28 07:19:15 crc kubenswrapper[4642]: I0128 07:19:15.542587 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" event={"ID":"4fe048e8-d571-4c2a-a306-3b1e9fdc1798","Type":"ContainerStarted","Data":"19b8615b966aa448d929fdcaa56cab7f983a3af6507ee224560cb2acabd0e479"} Jan 28 07:19:15 crc kubenswrapper[4642]: I0128 07:19:15.558944 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" podStartSLOduration=1.948505527 podStartE2EDuration="2.558927197s" podCreationTimestamp="2026-01-28 07:19:13 +0000 UTC" firstStartedPulling="2026-01-28 07:19:14.303588381 +0000 UTC m=+1877.535677191" lastFinishedPulling="2026-01-28 07:19:14.914010052 +0000 UTC m=+1878.146098861" observedRunningTime="2026-01-28 07:19:15.557486547 +0000 UTC m=+1878.789575356" watchObservedRunningTime="2026-01-28 07:19:15.558927197 +0000 UTC m=+1878.791016005" Jan 28 07:20:08 crc kubenswrapper[4642]: I0128 07:20:08.199649 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:20:08 crc kubenswrapper[4642]: I0128 07:20:08.199972 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:20:38 crc kubenswrapper[4642]: I0128 07:20:38.199471 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:20:38 crc kubenswrapper[4642]: I0128 07:20:38.199807 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:21:08 crc kubenswrapper[4642]: I0128 07:21:08.199981 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:21:08 crc kubenswrapper[4642]: I0128 07:21:08.200451 4642 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:21:08 crc kubenswrapper[4642]: I0128 07:21:08.200492 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:21:08 crc kubenswrapper[4642]: I0128 07:21:08.201114 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:21:08 crc kubenswrapper[4642]: I0128 07:21:08.201162 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79" gracePeriod=600 Jan 28 07:21:09 crc kubenswrapper[4642]: I0128 07:21:09.200917 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79" exitCode=0 Jan 28 07:21:09 crc kubenswrapper[4642]: I0128 07:21:09.201012 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79"} Jan 28 07:21:09 crc kubenswrapper[4642]: I0128 07:21:09.201397 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"} Jan 28 07:21:09 crc kubenswrapper[4642]: I0128 07:21:09.201422 4642 scope.go:117] "RemoveContainer" containerID="c9b0ccd578b76f711793a52e4fbebb1998cf31023e773fda5a4c995750b44303" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.858051 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.860065 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.865621 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.868637 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.868665 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2h5\" (UniqueName: \"kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.868694 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.970851 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.971207 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.971236 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2h5\" (UniqueName: \"kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.971927 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.972159 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:17 crc kubenswrapper[4642]: I0128 07:21:17.995133 4642 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nb2h5\" (UniqueName: \"kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5\") pod \"redhat-marketplace-7znwd\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:18 crc kubenswrapper[4642]: I0128 07:21:18.174036 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:18 crc kubenswrapper[4642]: I0128 07:21:18.553284 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:19 crc kubenswrapper[4642]: I0128 07:21:19.260110 4642 generic.go:334] "Generic (PLEG): container finished" podID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerID="c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6" exitCode=0 Jan 28 07:21:19 crc kubenswrapper[4642]: I0128 07:21:19.260171 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerDied","Data":"c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6"} Jan 28 07:21:19 crc kubenswrapper[4642]: I0128 07:21:19.260380 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerStarted","Data":"e0df58251d97c0d97347d5b527e8a4a7f6033088a8dbb6a5b04996d44212108f"} Jan 28 07:21:20 crc kubenswrapper[4642]: I0128 07:21:20.268844 4642 generic.go:334] "Generic (PLEG): container finished" podID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerID="832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385" exitCode=0 Jan 28 07:21:20 crc kubenswrapper[4642]: I0128 07:21:20.268931 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerDied","Data":"832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385"} Jan 28 07:21:21 crc kubenswrapper[4642]: I0128 07:21:21.277802 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerStarted","Data":"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c"} Jan 28 07:21:21 crc kubenswrapper[4642]: I0128 07:21:21.293388 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7znwd" podStartSLOduration=2.741151489 podStartE2EDuration="4.293373284s" podCreationTimestamp="2026-01-28 07:21:17 +0000 UTC" firstStartedPulling="2026-01-28 07:21:19.261782545 +0000 UTC m=+2002.493871355" lastFinishedPulling="2026-01-28 07:21:20.814004341 +0000 UTC m=+2004.046093150" observedRunningTime="2026-01-28 07:21:21.289301408 +0000 UTC m=+2004.521390217" watchObservedRunningTime="2026-01-28 07:21:21.293373284 +0000 UTC m=+2004.525462093" Jan 28 07:21:28 crc kubenswrapper[4642]: I0128 07:21:28.174340 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:28 crc kubenswrapper[4642]: I0128 07:21:28.175034 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:28 crc kubenswrapper[4642]: I0128 07:21:28.207704 4642 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:28 crc kubenswrapper[4642]: I0128 07:21:28.359875 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:28 crc kubenswrapper[4642]: I0128 07:21:28.433241 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.340756 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7znwd" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="registry-server" containerID="cri-o://a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c" gracePeriod=2 Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.699402 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.883992 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content\") pod \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.884121 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb2h5\" (UniqueName: \"kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5\") pod \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.884212 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities\") pod \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\" (UID: \"0d87495c-28e5-4c2d-a492-22199aa2d3c8\") " Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.884974 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities" (OuterVolumeSpecName: "utilities") pod "0d87495c-28e5-4c2d-a492-22199aa2d3c8" (UID: "0d87495c-28e5-4c2d-a492-22199aa2d3c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.889342 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5" (OuterVolumeSpecName: "kube-api-access-nb2h5") pod "0d87495c-28e5-4c2d-a492-22199aa2d3c8" (UID: "0d87495c-28e5-4c2d-a492-22199aa2d3c8"). InnerVolumeSpecName "kube-api-access-nb2h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.900164 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d87495c-28e5-4c2d-a492-22199aa2d3c8" (UID: "0d87495c-28e5-4c2d-a492-22199aa2d3c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.986654 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.986831 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb2h5\" (UniqueName: \"kubernetes.io/projected/0d87495c-28e5-4c2d-a492-22199aa2d3c8-kube-api-access-nb2h5\") on node \"crc\" DevicePath \"\"" Jan 28 07:21:30 crc kubenswrapper[4642]: I0128 07:21:30.986897 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d87495c-28e5-4c2d-a492-22199aa2d3c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.347961 4642 generic.go:334] "Generic (PLEG): container finished" podID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerID="a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c" exitCode=0 Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.348001 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerDied","Data":"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c"} Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.348024 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7znwd" event={"ID":"0d87495c-28e5-4c2d-a492-22199aa2d3c8","Type":"ContainerDied","Data":"e0df58251d97c0d97347d5b527e8a4a7f6033088a8dbb6a5b04996d44212108f"} Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.348030 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7znwd" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.348039 4642 scope.go:117] "RemoveContainer" containerID="a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.364639 4642 scope.go:117] "RemoveContainer" containerID="832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.365198 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.371148 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7znwd"] Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.391585 4642 scope.go:117] "RemoveContainer" containerID="c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.411121 4642 scope.go:117] "RemoveContainer" containerID="a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c" Jan 28 07:21:31 crc kubenswrapper[4642]: E0128 07:21:31.411619 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c\": container with ID starting with a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c not found: ID does not exist" containerID="a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.411649 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c"} err="failed to get container status \"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c\": rpc error: code = NotFound desc = could not find container \"a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c\": container with ID starting with a2ad3c2695d86b94c3677a3d8702e6f953cf735ef6adebb4759018698b237e9c not found: ID does not exist" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.411669 4642 scope.go:117] "RemoveContainer" containerID="832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385" Jan 28 07:21:31 crc kubenswrapper[4642]: E0128 07:21:31.411929 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385\": container with ID starting with 832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385 not found: ID does not exist" containerID="832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.411957 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385"} err="failed to get container status \"832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385\": rpc error: code = NotFound desc = could not find container \"832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385\": container with ID starting with 832eee8f4fd8ca1194cd1d908c310685720e8d28e43a5ee679cf2fa8a403c385 not found: ID does not exist" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.411970 4642 scope.go:117] "RemoveContainer" 
containerID="c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6" Jan 28 07:21:31 crc kubenswrapper[4642]: E0128 07:21:31.412319 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6\": container with ID starting with c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6 not found: ID does not exist" containerID="c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6" Jan 28 07:21:31 crc kubenswrapper[4642]: I0128 07:21:31.412338 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6"} err="failed to get container status \"c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6\": rpc error: code = NotFound desc = could not find container \"c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6\": container with ID starting with c9ffda4a3964f006f2444782a4993272a161d8a1dc44d9f0922e65a106c061e6 not found: ID does not exist" Jan 28 07:21:33 crc kubenswrapper[4642]: I0128 07:21:33.106128 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" path="/var/lib/kubelet/pods/0d87495c-28e5-4c2d-a492-22199aa2d3c8/volumes" Jan 28 07:22:00 crc kubenswrapper[4642]: I0128 07:22:00.519263 4642 generic.go:334] "Generic (PLEG): container finished" podID="4fe048e8-d571-4c2a-a306-3b1e9fdc1798" containerID="19b8615b966aa448d929fdcaa56cab7f983a3af6507ee224560cb2acabd0e479" exitCode=0 Jan 28 07:22:00 crc kubenswrapper[4642]: I0128 07:22:00.519360 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" event={"ID":"4fe048e8-d571-4c2a-a306-3b1e9fdc1798","Type":"ContainerDied","Data":"19b8615b966aa448d929fdcaa56cab7f983a3af6507ee224560cb2acabd0e479"} Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.847008 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.878101 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0\") pod \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.878144 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf8tp\" (UniqueName: \"kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp\") pod \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.878275 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle\") pod \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.878315 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam\") pod \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.878354 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory\") pod \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\" (UID: \"4fe048e8-d571-4c2a-a306-3b1e9fdc1798\") " Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.883704 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4fe048e8-d571-4c2a-a306-3b1e9fdc1798" (UID: "4fe048e8-d571-4c2a-a306-3b1e9fdc1798"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.885199 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp" (OuterVolumeSpecName: "kube-api-access-nf8tp") pod "4fe048e8-d571-4c2a-a306-3b1e9fdc1798" (UID: "4fe048e8-d571-4c2a-a306-3b1e9fdc1798"). InnerVolumeSpecName "kube-api-access-nf8tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.898087 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory" (OuterVolumeSpecName: "inventory") pod "4fe048e8-d571-4c2a-a306-3b1e9fdc1798" (UID: "4fe048e8-d571-4c2a-a306-3b1e9fdc1798"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.898786 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fe048e8-d571-4c2a-a306-3b1e9fdc1798" (UID: "4fe048e8-d571-4c2a-a306-3b1e9fdc1798"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.900613 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4fe048e8-d571-4c2a-a306-3b1e9fdc1798" (UID: "4fe048e8-d571-4c2a-a306-3b1e9fdc1798"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.980117 4642 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.980142 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.980153 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.980162 4642 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:22:01 crc kubenswrapper[4642]: I0128 07:22:01.980169 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf8tp\" (UniqueName: \"kubernetes.io/projected/4fe048e8-d571-4c2a-a306-3b1e9fdc1798-kube-api-access-nf8tp\") on node \"crc\" DevicePath \"\"" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.532852 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" event={"ID":"4fe048e8-d571-4c2a-a306-3b1e9fdc1798","Type":"ContainerDied","Data":"d2ecb2e24b173065f1ada0b4ca839299d85c83197d91436454aeee1878314e0b"} Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.532892 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ecb2e24b173065f1ada0b4ca839299d85c83197d91436454aeee1878314e0b" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.532898 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5snh2" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.593646 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg"] Jan 28 07:22:02 crc kubenswrapper[4642]: E0128 07:22:02.594201 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="extract-content" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594217 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="extract-content" Jan 28 07:22:02 crc kubenswrapper[4642]: E0128 07:22:02.594227 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="extract-utilities" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594233 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="extract-utilities" Jan 28 07:22:02 crc kubenswrapper[4642]: E0128 07:22:02.594255 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="registry-server" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594262 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="registry-server" Jan 28 07:22:02 crc kubenswrapper[4642]: E0128 07:22:02.594278 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe048e8-d571-4c2a-a306-3b1e9fdc1798" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594285 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe048e8-d571-4c2a-a306-3b1e9fdc1798" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594433 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe048e8-d571-4c2a-a306-3b1e9fdc1798" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594443 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d87495c-28e5-4c2d-a492-22199aa2d3c8" containerName="registry-server" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.594904 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.596445 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.596629 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.596833 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.596972 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.597075 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.597204 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.598311 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.601442 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg"] Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792297 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792359 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792450 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792477 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792520 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792542 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792559 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792661 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtg2\" (UniqueName: \"kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.792703 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894343 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtg2\" (UniqueName: \"kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894422 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894471 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894515 4642 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894585 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894605 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894659 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894684 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.894701 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.895449 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.898263 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.898400 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.898605 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.899170 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.899388 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.899773 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.900660 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:02 crc kubenswrapper[4642]: I0128 07:22:02.908459 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtg2\" (UniqueName: \"kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jkwgg\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:03 crc kubenswrapper[4642]: I0128 07:22:03.208023 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:22:03 crc kubenswrapper[4642]: I0128 07:22:03.628791 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg"] Jan 28 07:22:04 crc kubenswrapper[4642]: I0128 07:22:04.549074 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" event={"ID":"57077b74-e1c4-4ab3-b414-1301bacf7e3c","Type":"ContainerStarted","Data":"7b03a74edfea99f31cf9f909866cc62c75c3a1766887bf31e865485bb75e333c"} Jan 28 07:22:04 crc kubenswrapper[4642]: I0128 07:22:04.549460 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" event={"ID":"57077b74-e1c4-4ab3-b414-1301bacf7e3c","Type":"ContainerStarted","Data":"d286959119e1dcf9beadb35ea8cd86b6b7e00aa2355d34763f7772d1b1019366"} Jan 28 07:22:04 crc kubenswrapper[4642]: I0128 07:22:04.561858 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" podStartSLOduration=2.037715631 podStartE2EDuration="2.561846428s" podCreationTimestamp="2026-01-28 07:22:02 +0000 UTC" firstStartedPulling="2026-01-28 07:22:03.633007941 +0000 UTC m=+2046.865096751" lastFinishedPulling="2026-01-28 07:22:04.157138739 +0000 UTC m=+2047.389227548" observedRunningTime="2026-01-28 07:22:04.559640661 +0000 UTC m=+2047.791729470" watchObservedRunningTime="2026-01-28 07:22:04.561846428 +0000 UTC m=+2047.793935237" Jan 28 07:23:08 crc kubenswrapper[4642]: I0128 07:23:08.199075 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:23:08 crc kubenswrapper[4642]: I0128 07:23:08.199420 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:23:38 crc kubenswrapper[4642]: I0128 07:23:38.082374 4642 generic.go:334] "Generic (PLEG): container finished" podID="57077b74-e1c4-4ab3-b414-1301bacf7e3c" containerID="7b03a74edfea99f31cf9f909866cc62c75c3a1766887bf31e865485bb75e333c" exitCode=0 Jan 28 07:23:38 crc kubenswrapper[4642]: I0128 07:23:38.082398 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" event={"ID":"57077b74-e1c4-4ab3-b414-1301bacf7e3c","Type":"ContainerDied","Data":"7b03a74edfea99f31cf9f909866cc62c75c3a1766887bf31e865485bb75e333c"} Jan 28 07:23:38 crc kubenswrapper[4642]: I0128 07:23:38.199722 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:23:38 crc kubenswrapper[4642]: I0128 07:23:38.199781 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.406336 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.532769 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.532829 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdtg2\" (UniqueName: \"kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.532926 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.532978 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.533000 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.533030 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.533054 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.533094 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam\") pod \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.533111 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1\") pod 
\"57077b74-e1c4-4ab3-b414-1301bacf7e3c\" (UID: \"57077b74-e1c4-4ab3-b414-1301bacf7e3c\") " Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.539517 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2" (OuterVolumeSpecName: "kube-api-access-wdtg2") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "kube-api-access-wdtg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.557418 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.557970 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory" (OuterVolumeSpecName: "inventory") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.558351 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.559229 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.559763 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.564579 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.577533 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.580491 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "57077b74-e1c4-4ab3-b414-1301bacf7e3c" (UID: "57077b74-e1c4-4ab3-b414-1301bacf7e3c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634614 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634829 4642 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634853 4642 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634861 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdtg2\" (UniqueName: \"kubernetes.io/projected/57077b74-e1c4-4ab3-b414-1301bacf7e3c-kube-api-access-wdtg2\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634871 4642 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634879 4642 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634887 4642 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634895 4642 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/57077b74-e1c4-4ab3-b414-1301bacf7e3c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:39 crc kubenswrapper[4642]: I0128 07:23:39.634902 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57077b74-e1c4-4ab3-b414-1301bacf7e3c-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:23:40 crc 
kubenswrapper[4642]: I0128 07:23:40.095044 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" event={"ID":"57077b74-e1c4-4ab3-b414-1301bacf7e3c","Type":"ContainerDied","Data":"d286959119e1dcf9beadb35ea8cd86b6b7e00aa2355d34763f7772d1b1019366"} Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.095080 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d286959119e1dcf9beadb35ea8cd86b6b7e00aa2355d34763f7772d1b1019366" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.095099 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jkwgg" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.274092 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb"] Jan 28 07:23:40 crc kubenswrapper[4642]: E0128 07:23:40.274428 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57077b74-e1c4-4ab3-b414-1301bacf7e3c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.274444 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="57077b74-e1c4-4ab3-b414-1301bacf7e3c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.274600 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="57077b74-e1c4-4ab3-b414-1301bacf7e3c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.275104 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.276198 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.276818 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r6m2m" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.277484 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.277978 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.278568 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.284342 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb"] Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446017 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446091 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446136 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446283 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgng9\" (UniqueName: \"kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446314 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446610 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.446683 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.548849 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.548899 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.548946 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgng9\" (UniqueName: \"kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.548975 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.549046 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.549080 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.549116 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.552460 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.552889 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.553229 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.553274 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.554210 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.560494 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.562014 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgng9\" (UniqueName: \"kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:40 crc kubenswrapper[4642]: I0128 07:23:40.599226 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:23:41 crc kubenswrapper[4642]: I0128 07:23:41.031986 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb"] Jan 28 07:23:41 crc kubenswrapper[4642]: I0128 07:23:41.039889 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:23:41 crc kubenswrapper[4642]: I0128 07:23:41.105727 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" event={"ID":"3d342237-d10d-4315-a659-c8f91ecc6d5d","Type":"ContainerStarted","Data":"135ff3f3b6acc39e8852b728fbd76f585fda555e51d70e67ddf5a9af37ed234d"} Jan 28 07:23:42 crc kubenswrapper[4642]: I0128 07:23:42.110056 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" event={"ID":"3d342237-d10d-4315-a659-c8f91ecc6d5d","Type":"ContainerStarted","Data":"d3171381a01ab1d24a3b5cca6c6c4d72206f2697877f72bdd91b305a84899fab"} Jan 28 07:23:42 crc kubenswrapper[4642]: I0128 07:23:42.124468 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" podStartSLOduration=1.5707447079999999 podStartE2EDuration="2.124452194s" podCreationTimestamp="2026-01-28 07:23:40 +0000 UTC" firstStartedPulling="2026-01-28 07:23:41.039705432 +0000 UTC m=+2144.271794241" lastFinishedPulling="2026-01-28 07:23:41.593412919 +0000 UTC m=+2144.825501727" observedRunningTime="2026-01-28 07:23:42.121251175 +0000 UTC m=+2145.353339984" watchObservedRunningTime="2026-01-28 07:23:42.124452194 +0000 UTC m=+2145.356541003" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.401231 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjvsc"] Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.403434 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.411856 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjvsc"] Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.486511 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-catalog-content\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.486561 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-utilities\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.486606 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdsn\" (UniqueName: \"kubernetes.io/projected/70ce17da-66a5-4aed-90f5-3ed27538b630-kube-api-access-nhdsn\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.588230 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdsn\" (UniqueName: \"kubernetes.io/projected/70ce17da-66a5-4aed-90f5-3ed27538b630-kube-api-access-nhdsn\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.588379 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-catalog-content\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.588400 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-utilities\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.588764 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-utilities\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.588814 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ce17da-66a5-4aed-90f5-3ed27538b630-catalog-content\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.601268 4642 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.602741 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.623241 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.630857 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhdsn\" (UniqueName: \"kubernetes.io/projected/70ce17da-66a5-4aed-90f5-3ed27538b630-kube-api-access-nhdsn\") pod \"community-operators-qjvsc\" (UID: \"70ce17da-66a5-4aed-90f5-3ed27538b630\") " pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.690155 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv42t\" (UniqueName: \"kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.690356 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.690471 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.717098 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.791608 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.791669 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.791759 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv42t\" (UniqueName: \"kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.792041 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.792064 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.816004 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv42t\" (UniqueName: \"kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t\") pod \"certified-operators-lxxsv\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:05 crc kubenswrapper[4642]: I0128 07:24:05.961792 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:06 crc kubenswrapper[4642]: I0128 07:24:06.196999 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjvsc"] Jan 28 07:24:06 crc kubenswrapper[4642]: I0128 07:24:06.268243 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjvsc" event={"ID":"70ce17da-66a5-4aed-90f5-3ed27538b630","Type":"ContainerStarted","Data":"e1e0fc5cb1ba8097f874481129f03d887c3cb4709438965ac287403759b1e659"} Jan 28 07:24:06 crc kubenswrapper[4642]: I0128 07:24:06.376454 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:06 crc kubenswrapper[4642]: W0128 07:24:06.425083 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d822fa6_1a1e_4fc4_8a2c_7d57744d06e3.slice/crio-7aebed28bfd446fad9c9e0e357919af0df25ee93487b55f4cf596f53d4028f41 WatchSource:0}: Error finding container 7aebed28bfd446fad9c9e0e357919af0df25ee93487b55f4cf596f53d4028f41: Status 404 returned error can't find the container with id 7aebed28bfd446fad9c9e0e357919af0df25ee93487b55f4cf596f53d4028f41 Jan 28 07:24:07 crc kubenswrapper[4642]: I0128 07:24:07.276028 4642 generic.go:334] "Generic (PLEG): container finished" podID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerID="6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017" exitCode=0 Jan 28 07:24:07 crc kubenswrapper[4642]: I0128 07:24:07.276144 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerDied","Data":"6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017"} Jan 28 07:24:07 crc kubenswrapper[4642]: I0128 07:24:07.276340 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerStarted","Data":"7aebed28bfd446fad9c9e0e357919af0df25ee93487b55f4cf596f53d4028f41"} Jan 28 07:24:07 crc kubenswrapper[4642]: I0128 07:24:07.277990 4642 generic.go:334] "Generic (PLEG): container finished" podID="70ce17da-66a5-4aed-90f5-3ed27538b630" containerID="42504fe9565292e0ded7fb90e17293b8bad673a8ac1e01643ffd895424305e99" exitCode=0 Jan 28 07:24:07 crc kubenswrapper[4642]: I0128 07:24:07.278038 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjvsc" event={"ID":"70ce17da-66a5-4aed-90f5-3ed27538b630","Type":"ContainerDied","Data":"42504fe9565292e0ded7fb90e17293b8bad673a8ac1e01643ffd895424305e99"} Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.199436 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.199813 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:24:08 crc kubenswrapper[4642]: 
I0128 07:24:08.199863 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.200900 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.200972 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" gracePeriod=600 Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.288591 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerStarted","Data":"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d"} Jan 28 07:24:08 crc kubenswrapper[4642]: E0128 07:24:08.397041 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.606000 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.607641 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.613404 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.744519 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.744577 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rgk2\" (UniqueName: \"kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.744749 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.846405 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.846529 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.846571 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rgk2\" (UniqueName: \"kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.846858 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.847113 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.868513 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4rgk2\" (UniqueName: \"kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2\") pod \"redhat-operators-qrxf2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:08 crc kubenswrapper[4642]: I0128 07:24:08.930771 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.299608 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" exitCode=0 Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.299894 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"} Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.299945 4642 scope.go:117] "RemoveContainer" containerID="d8d011239be864ef4fc9acb8d923c22235a3960efad1e195326a0020a8cd4b79" Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.300581 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:24:09 crc kubenswrapper[4642]: E0128 07:24:09.300852 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.303825 4642 generic.go:334] "Generic (PLEG): container finished" podID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerID="b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d" exitCode=0 Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.303852 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerDied","Data":"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d"} Jan 28 07:24:09 crc kubenswrapper[4642]: W0128 07:24:09.358849 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c1968f_45d6_47aa_887e_971ca2c44bc2.slice/crio-70111618fb3f8e1528d135c106beb2c5b32152af9d087e9b371fcba520ffcf01 WatchSource:0}: Error finding container 70111618fb3f8e1528d135c106beb2c5b32152af9d087e9b371fcba520ffcf01: Status 404 returned error can't find the container with id 70111618fb3f8e1528d135c106beb2c5b32152af9d087e9b371fcba520ffcf01 Jan 28 07:24:09 crc kubenswrapper[4642]: I0128 07:24:09.369583 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:10 crc kubenswrapper[4642]: I0128 07:24:10.313240 4642 generic.go:334] "Generic (PLEG): container finished" podID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerID="8dfc1cb4b1db61a00e248b26ab0a9e0e3a865a0d736b103feaa3d4694a53ca62" exitCode=0 Jan 28 07:24:10 crc kubenswrapper[4642]: I0128 07:24:10.313319 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerDied","Data":"8dfc1cb4b1db61a00e248b26ab0a9e0e3a865a0d736b103feaa3d4694a53ca62"} Jan 28 07:24:10 crc kubenswrapper[4642]: I0128 07:24:10.313606 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerStarted","Data":"70111618fb3f8e1528d135c106beb2c5b32152af9d087e9b371fcba520ffcf01"} Jan 28 07:24:11 crc kubenswrapper[4642]: I0128 07:24:11.321610 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerStarted","Data":"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e"} Jan 28 07:24:11 crc kubenswrapper[4642]: I0128 07:24:11.323073 4642 generic.go:334] "Generic (PLEG): container finished" podID="70ce17da-66a5-4aed-90f5-3ed27538b630" containerID="1dbd626f2f81dc84eb09eb9502283775a8dc89c141685e8862b73b90ddf627fd" exitCode=0 Jan 28 07:24:11 crc kubenswrapper[4642]: I0128 07:24:11.323108 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjvsc" event={"ID":"70ce17da-66a5-4aed-90f5-3ed27538b630","Type":"ContainerDied","Data":"1dbd626f2f81dc84eb09eb9502283775a8dc89c141685e8862b73b90ddf627fd"} Jan 28 07:24:11 crc kubenswrapper[4642]: I0128 07:24:11.339705 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lxxsv" podStartSLOduration=2.912878851 podStartE2EDuration="6.339689651s" podCreationTimestamp="2026-01-28 07:24:05 +0000 UTC" firstStartedPulling="2026-01-28 07:24:07.278011113 +0000 UTC m=+2170.510099921" lastFinishedPulling="2026-01-28 07:24:10.704821912 +0000 UTC m=+2173.936910721" observedRunningTime="2026-01-28 07:24:11.336050078 +0000 UTC m=+2174.568138887" watchObservedRunningTime="2026-01-28 07:24:11.339689651 +0000 UTC m=+2174.571778460" Jan 28 07:24:12 crc kubenswrapper[4642]: I0128 07:24:12.331556 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjvsc" event={"ID":"70ce17da-66a5-4aed-90f5-3ed27538b630","Type":"ContainerStarted","Data":"9c2de18bbe6029f047a309241511bb4763966cc27bd58091a506d9391fa63b4a"} Jan 28 07:24:12 crc kubenswrapper[4642]: I0128 07:24:12.332870 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerStarted","Data":"9eacc7fe5901f5a257e8c81532c18a017c1a62c62377ab761b5d2d675937013b"} Jan 28 07:24:12 crc kubenswrapper[4642]: I0128 07:24:12.384675 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjvsc" podStartSLOduration=2.880434669 podStartE2EDuration="7.384659273s" podCreationTimestamp="2026-01-28 07:24:05 +0000 UTC" firstStartedPulling="2026-01-28 07:24:07.279282392 +0000 UTC m=+2170.511371202" lastFinishedPulling="2026-01-28 07:24:11.783506998 +0000 UTC m=+2175.015595806" observedRunningTime="2026-01-28 07:24:12.362664005 +0000 UTC m=+2175.594752815" watchObservedRunningTime="2026-01-28 07:24:12.384659273 +0000 UTC m=+2175.616748081" Jan 28 07:24:14 crc kubenswrapper[4642]: I0128 07:24:14.345117 4642 generic.go:334] "Generic (PLEG): container finished" podID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" 
containerID="9eacc7fe5901f5a257e8c81532c18a017c1a62c62377ab761b5d2d675937013b" exitCode=0 Jan 28 07:24:14 crc kubenswrapper[4642]: I0128 07:24:14.345156 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerDied","Data":"9eacc7fe5901f5a257e8c81532c18a017c1a62c62377ab761b5d2d675937013b"} Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.353761 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerStarted","Data":"4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320"} Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.371730 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrxf2" podStartSLOduration=3.232673314 podStartE2EDuration="7.371714344s" podCreationTimestamp="2026-01-28 07:24:08 +0000 UTC" firstStartedPulling="2026-01-28 07:24:10.691562632 +0000 UTC m=+2173.923651441" lastFinishedPulling="2026-01-28 07:24:14.830603662 +0000 UTC m=+2178.062692471" observedRunningTime="2026-01-28 07:24:15.365920169 +0000 UTC m=+2178.598008978" watchObservedRunningTime="2026-01-28 07:24:15.371714344 +0000 UTC m=+2178.603803153" Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.717778 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.717823 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.753398 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.962844 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:15 crc kubenswrapper[4642]: I0128 07:24:15.962893 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:16 crc kubenswrapper[4642]: I0128 07:24:16.000096 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:16 crc kubenswrapper[4642]: I0128 07:24:16.392021 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjvsc" Jan 28 07:24:16 crc kubenswrapper[4642]: I0128 07:24:16.402144 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:16 crc kubenswrapper[4642]: I0128 07:24:16.997516 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.372925 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lxxsv" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="registry-server" containerID="cri-o://814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e" gracePeriod=2 Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.727565 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.812780 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjvsc"] Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.917449 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities\") pod \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.917621 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv42t\" (UniqueName: \"kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t\") pod \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.917790 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content\") pod \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\" (UID: \"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3\") " Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.917971 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities" (OuterVolumeSpecName: "utilities") pod "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" (UID: "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.918339 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.922738 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t" (OuterVolumeSpecName: "kube-api-access-xv42t") pod "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" (UID: "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3"). InnerVolumeSpecName "kube-api-access-xv42t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.931310 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.931361 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:18 crc kubenswrapper[4642]: I0128 07:24:18.947065 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" (UID: "5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.019950 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.020107 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv42t\" (UniqueName: \"kubernetes.io/projected/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3-kube-api-access-xv42t\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.192209 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.192417 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pm857" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="registry-server" containerID="cri-o://45a63fa4cd5dbd942ee01e7213dc178a94512dbc13a144028444c316cf620c14" gracePeriod=2 Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.383327 4642 generic.go:334] "Generic (PLEG): container finished" podID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerID="814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e" exitCode=0 Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.383376 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lxxsv" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.383391 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerDied","Data":"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e"} Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.383416 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lxxsv" event={"ID":"5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3","Type":"ContainerDied","Data":"7aebed28bfd446fad9c9e0e357919af0df25ee93487b55f4cf596f53d4028f41"} Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.383431 4642 scope.go:117] "RemoveContainer" containerID="814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.388145 4642 generic.go:334] "Generic (PLEG): container finished" podID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerID="45a63fa4cd5dbd942ee01e7213dc178a94512dbc13a144028444c316cf620c14" exitCode=0 Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.388206 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerDied","Data":"45a63fa4cd5dbd942ee01e7213dc178a94512dbc13a144028444c316cf620c14"} Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.404165 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.414714 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lxxsv"] Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.420716 4642 scope.go:117] "RemoveContainer" containerID="b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d" Jan 28 07:24:19 crc 
kubenswrapper[4642]: I0128 07:24:19.446274 4642 scope.go:117] "RemoveContainer" containerID="6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.469204 4642 scope.go:117] "RemoveContainer" containerID="814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e" Jan 28 07:24:19 crc kubenswrapper[4642]: E0128 07:24:19.470693 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e\": container with ID starting with 814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e not found: ID does not exist" containerID="814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.470731 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e"} err="failed to get container status \"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e\": rpc error: code = NotFound desc = could not find container \"814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e\": container with ID starting with 814a1e8c61726cd8ca03a86194d062a47be11c16125e761959d258193e931a5e not found: ID does not exist" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.470753 4642 scope.go:117] "RemoveContainer" containerID="b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d" Jan 28 07:24:19 crc kubenswrapper[4642]: E0128 07:24:19.471011 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d\": container with ID starting with b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d not found: ID does not exist" containerID="b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.471028 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d"} err="failed to get container status \"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d\": rpc error: code = NotFound desc = could not find container \"b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d\": container with ID starting with b0e660921129fbc0a45627edeaf2297b6a75ba6c0d5a4346f1480bf446504a3d not found: ID does not exist" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.471040 4642 scope.go:117] "RemoveContainer" containerID="6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017" Jan 28 07:24:19 crc kubenswrapper[4642]: E0128 07:24:19.471366 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017\": container with ID starting with 6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017 not found: ID does not exist" containerID="6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.471385 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017"} err="failed to get container status 
\"6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017\": rpc error: code = NotFound desc = could not find container \"6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017\": container with ID starting with 6b4fa66817c6eb16799cabc85b0e53331d56d1005f50d5839ea24b39067e6017 not found: ID does not exist" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.574805 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pm857" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.731015 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content\") pod \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.731085 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities\") pod \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.731303 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jv48\" (UniqueName: \"kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48\") pod \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\" (UID: \"4ecdab48-c9af-40b5-9fbb-8c290986cef1\") " Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.732585 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities" (OuterVolumeSpecName: "utilities") pod "4ecdab48-c9af-40b5-9fbb-8c290986cef1" (UID: "4ecdab48-c9af-40b5-9fbb-8c290986cef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.734832 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48" (OuterVolumeSpecName: "kube-api-access-7jv48") pod "4ecdab48-c9af-40b5-9fbb-8c290986cef1" (UID: "4ecdab48-c9af-40b5-9fbb-8c290986cef1"). InnerVolumeSpecName "kube-api-access-7jv48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.785916 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ecdab48-c9af-40b5-9fbb-8c290986cef1" (UID: "4ecdab48-c9af-40b5-9fbb-8c290986cef1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.832828 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jv48\" (UniqueName: \"kubernetes.io/projected/4ecdab48-c9af-40b5-9fbb-8c290986cef1-kube-api-access-7jv48\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.832854 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.832863 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecdab48-c9af-40b5-9fbb-8c290986cef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:19 crc kubenswrapper[4642]: I0128 07:24:19.961049 4642 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrxf2" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="registry-server" probeResult="failure" output=< Jan 28 07:24:19 crc kubenswrapper[4642]: timeout: failed to connect service ":50051" within 1s Jan 28 07:24:19 crc kubenswrapper[4642]: > Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.397382 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pm857" event={"ID":"4ecdab48-c9af-40b5-9fbb-8c290986cef1","Type":"ContainerDied","Data":"d5253b10013ba51818338d6ef1765b9b3c85c0fc0ec4ee6d7f9a2eb545c0ab5e"} Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.397434 4642 scope.go:117] "RemoveContainer" containerID="45a63fa4cd5dbd942ee01e7213dc178a94512dbc13a144028444c316cf620c14" Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.397435 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pm857" Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.419352 4642 scope.go:117] "RemoveContainer" containerID="e18fcae8e7768fe229387f4b71f874462e90e40891f3d7eec956040cb86609ad" Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.423578 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.430947 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pm857"] Jan 28 07:24:20 crc kubenswrapper[4642]: I0128 07:24:20.448144 4642 scope.go:117] "RemoveContainer" containerID="51bb4c902cef575ad21bcf8b15321a4879a77db6bb2300e78e3e6e29f2ca500e" Jan 28 07:24:21 crc kubenswrapper[4642]: I0128 07:24:21.105775 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" path="/var/lib/kubelet/pods/4ecdab48-c9af-40b5-9fbb-8c290986cef1/volumes" Jan 28 07:24:21 crc kubenswrapper[4642]: I0128 07:24:21.106686 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" path="/var/lib/kubelet/pods/5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3/volumes" Jan 28 07:24:23 crc kubenswrapper[4642]: I0128 07:24:23.098978 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:24:23 crc kubenswrapper[4642]: E0128 07:24:23.099472 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:24:28 crc kubenswrapper[4642]: I0128 07:24:28.961846 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:28 crc kubenswrapper[4642]: I0128 07:24:28.991551 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.186968 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.459629 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrxf2" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="registry-server" containerID="cri-o://4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320" gracePeriod=2 Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.814381 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.981646 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content\") pod \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.981947 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities\") pod \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.981992 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rgk2\" (UniqueName: \"kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2\") pod \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\" (UID: \"f6c1968f-45d6-47aa-887e-971ca2c44bc2\") " Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.982596 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities" (OuterVolumeSpecName: "utilities") pod "f6c1968f-45d6-47aa-887e-971ca2c44bc2" (UID: "f6c1968f-45d6-47aa-887e-971ca2c44bc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:30 crc kubenswrapper[4642]: I0128 07:24:30.986262 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2" (OuterVolumeSpecName: "kube-api-access-4rgk2") pod "f6c1968f-45d6-47aa-887e-971ca2c44bc2" (UID: "f6c1968f-45d6-47aa-887e-971ca2c44bc2"). InnerVolumeSpecName "kube-api-access-4rgk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.063363 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6c1968f-45d6-47aa-887e-971ca2c44bc2" (UID: "f6c1968f-45d6-47aa-887e-971ca2c44bc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.083565 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.083592 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rgk2\" (UniqueName: \"kubernetes.io/projected/f6c1968f-45d6-47aa-887e-971ca2c44bc2-kube-api-access-4rgk2\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.083604 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c1968f-45d6-47aa-887e-971ca2c44bc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.467395 4642 generic.go:334] "Generic (PLEG): container finished" podID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerID="4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320" exitCode=0 Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.467433 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerDied","Data":"4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320"} Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.467475 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrxf2" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.467458 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrxf2" event={"ID":"f6c1968f-45d6-47aa-887e-971ca2c44bc2","Type":"ContainerDied","Data":"70111618fb3f8e1528d135c106beb2c5b32152af9d087e9b371fcba520ffcf01"} Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.467507 4642 scope.go:117] "RemoveContainer" containerID="4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.485269 4642 scope.go:117] "RemoveContainer" containerID="9eacc7fe5901f5a257e8c81532c18a017c1a62c62377ab761b5d2d675937013b" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.485353 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.493053 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrxf2"] Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.501349 4642 scope.go:117] "RemoveContainer" containerID="8dfc1cb4b1db61a00e248b26ab0a9e0e3a865a0d736b103feaa3d4694a53ca62" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.529181 4642 scope.go:117] "RemoveContainer" containerID="4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320" Jan 28 07:24:31 crc kubenswrapper[4642]: E0128 07:24:31.529527 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320\": container with ID starting with 4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320 not found: ID does not exist" containerID="4e890babfe5b6330703ab65e80991c9023f97d127dca64ce678ae834a8b7a320" Jan 28 07:24:31 crc kubenswrapper[4642]: I0128 07:24:31.529574 4642 
Jan 28 07:24:33 crc kubenswrapper[4642]: I0128 07:24:33.105547 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" path="/var/lib/kubelet/pods/f6c1968f-45d6-47aa-887e-971ca2c44bc2/volumes"
Jan 28 07:24:35 crc kubenswrapper[4642]: I0128 07:24:35.098085 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:24:35 crc kubenswrapper[4642]: E0128 07:24:35.098484 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
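Pairs like the two entries above now repeat for several minutes: machine-config-daemon keeps crashing, and the kubelet refuses to restart it faster than its back-off allows. "back-off 5m0s" means the delay ladder has reached its cap; to my knowledge the kubelet defaults are an initial 10s delay that doubles per crash up to 5 minutes. A sketch of that ladder (an assumption about the defaults, not kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: next restart delayed %v\n", crash, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s -- the log's "back-off 5m0s"
	// is this ceiling; the ~11-15s gaps between entries are just sync retries.
}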
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:24:49 crc kubenswrapper[4642]: I0128 07:24:49.098938 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:24:49 crc kubenswrapper[4642]: E0128 07:24:49.099616 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:25:00 crc kubenswrapper[4642]: I0128 07:25:00.098470 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:25:00 crc kubenswrapper[4642]: E0128 07:25:00.099123 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:25:15 crc kubenswrapper[4642]: I0128 07:25:15.098236 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:25:15 crc kubenswrapper[4642]: E0128 07:25:15.098810 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:25:23 crc kubenswrapper[4642]: I0128 07:25:23.768502 4642 generic.go:334] "Generic (PLEG): container finished" podID="3d342237-d10d-4315-a659-c8f91ecc6d5d" containerID="d3171381a01ab1d24a3b5cca6c6c4d72206f2697877f72bdd91b305a84899fab" exitCode=0 Jan 28 07:25:23 crc kubenswrapper[4642]: I0128 07:25:23.768583 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" event={"ID":"3d342237-d10d-4315-a659-c8f91ecc6d5d","Type":"ContainerDied","Data":"d3171381a01ab1d24a3b5cca6c6c4d72206f2697877f72bdd91b305a84899fab"} Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.069906 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.201131 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.201416 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.201539 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.201950 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgng9\" (UniqueName: \"kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.202072 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.202214 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.202297 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1\") pod \"3d342237-d10d-4315-a659-c8f91ecc6d5d\" (UID: \"3d342237-d10d-4315-a659-c8f91ecc6d5d\") " Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.206307 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.206319 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9" (OuterVolumeSpecName: "kube-api-access-hgng9") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "kube-api-access-hgng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.221460 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.221875 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.222060 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.222137 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory" (OuterVolumeSpecName: "inventory") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.222417 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "3d342237-d10d-4315-a659-c8f91ecc6d5d" (UID: "3d342237-d10d-4315-a659-c8f91ecc6d5d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304319 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304345 4642 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304354 4642 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304363 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgng9\" (UniqueName: \"kubernetes.io/projected/3d342237-d10d-4315-a659-c8f91ecc6d5d-kube-api-access-hgng9\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304371 4642 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304380 4642 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.304388 4642 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d342237-d10d-4315-a659-c8f91ecc6d5d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.781354 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" event={"ID":"3d342237-d10d-4315-a659-c8f91ecc6d5d","Type":"ContainerDied","Data":"135ff3f3b6acc39e8852b728fbd76f585fda555e51d70e67ddf5a9af37ed234d"} Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.781575 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135ff3f3b6acc39e8852b728fbd76f585fda555e51d70e67ddf5a9af37ed234d" Jan 28 07:25:25 crc kubenswrapper[4642]: I0128 07:25:25.781399 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb" Jan 28 07:25:28 crc kubenswrapper[4642]: I0128 07:25:28.098437 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:25:28 crc kubenswrapper[4642]: E0128 07:25:28.098814 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:25:40 crc kubenswrapper[4642]: I0128 07:25:40.098577 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:25:40 crc kubenswrapper[4642]: E0128 07:25:40.099247 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:25:54 crc kubenswrapper[4642]: I0128 07:25:54.097936 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:25:54 crc kubenswrapper[4642]: E0128 07:25:54.098438 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:26:05 crc kubenswrapper[4642]: I0128 07:26:05.098354 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:26:05 crc kubenswrapper[4642]: E0128 07:26:05.098858 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.226476 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227151 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227165 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227179 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="registry-server" Jan 28 07:26:12 
crc kubenswrapper[4642]: I0128 07:26:12.227199 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227227 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227234 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227241 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227247 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227256 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227261 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227268 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d342237-d10d-4315-a659-c8f91ecc6d5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227274 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d342237-d10d-4315-a659-c8f91ecc6d5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227285 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227291 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="extract-utilities" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227299 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227304 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227317 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227322 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: E0128 07:26:12.227335 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227341 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="extract-content" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227546 4642 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d342237-d10d-4315-a659-c8f91ecc6d5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227559 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ecdab48-c9af-40b5-9fbb-8c290986cef1" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227567 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c1968f-45d6-47aa-887e-971ca2c44bc2" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.227574 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d822fa6-1a1e-4fc4-8a2c-7d57744d06e3" containerName="registry-server" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.228109 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.229828 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.230123 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.230205 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.230379 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x24mr" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.233854 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382582 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382614 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382669 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382812 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382874 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.382916 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.383050 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.383085 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.383150 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drx2n\" (UniqueName: \"kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.484957 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.485027 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drx2n\" (UniqueName: \"kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.485081 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.485392 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.485636 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486358 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486467 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486517 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486569 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486638 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486947 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.486952 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.487441 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.487535 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.490222 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.490240 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.490892 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.499055 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drx2n\" (UniqueName: \"kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.505139 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.543797 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 07:26:12 crc kubenswrapper[4642]: I0128 07:26:12.900114 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 07:26:13 crc kubenswrapper[4642]: I0128 07:26:13.062935 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7e694181-faba-42ea-a552-04cdb4a7536d","Type":"ContainerStarted","Data":"1cfb9bcec2b33f27a5f12e9f5221c309455796a813565130d8f2d3ea661c5922"} Jan 28 07:26:19 crc kubenswrapper[4642]: I0128 07:26:19.098431 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:26:19 crc kubenswrapper[4642]: E0128 07:26:19.098946 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:26:32 crc kubenswrapper[4642]: I0128 07:26:32.098417 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:26:32 crc kubenswrapper[4642]: E0128 07:26:32.098926 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:26:44 crc kubenswrapper[4642]: I0128 07:26:44.097847 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:26:44 crc kubenswrapper[4642]: E0128 07:26:44.098581 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:26:59 crc kubenswrapper[4642]: I0128 07:26:59.098743 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:26:59 crc kubenswrapper[4642]: E0128 07:26:59.099451 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:27:12 crc kubenswrapper[4642]: I0128 07:27:12.098637 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:27:12 crc kubenswrapper[4642]: E0128 07:27:12.099216 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:27:20 crc kubenswrapper[4642]: E0128 07:27:20.492875 4642 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:178ac55eec45150f6e175400f28ac55b" Jan 28 07:27:20 crc kubenswrapper[4642]: E0128 07:27:20.493250 4642 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:178ac55eec45150f6e175400f28ac55b" Jan 28 07:27:20 crc kubenswrapper[4642]: E0128 07:27:20.493350 4642 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:178ac55eec45150f6e175400f28ac55b,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drx2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinO
Jan 28 07:27:25 crc kubenswrapper[4642]: I0128 07:27:25.098286 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:27:25 crc kubenswrapper[4642]: E0128 07:27:25.098936 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:27:32 crc kubenswrapper[4642]: I0128 07:27:32.748892 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 28 07:27:33 crc kubenswrapper[4642]: I0128 07:27:33.604174 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7e694181-faba-42ea-a552-04cdb4a7536d","Type":"ContainerStarted","Data":"dabfb3b37345f96f215b678341b602af7564a5e244036f63ec16786cdb2543c7"}
Jan 28 07:27:33 crc kubenswrapper[4642]: I0128 07:27:33.621128 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=2.777242812 podStartE2EDuration="1m22.621114356s" podCreationTimestamp="2026-01-28 07:26:11 +0000 UTC" firstStartedPulling="2026-01-28 07:26:12.903393638 +0000 UTC m=+2296.135482448" lastFinishedPulling="2026-01-28 07:27:32.747265183 +0000 UTC m=+2375.979353992" observedRunningTime="2026-01-28 07:27:33.615618461 +0000 UTC m=+2376.847707270" watchObservedRunningTime="2026-01-28 07:27:33.621114356 +0000 UTC m=+2376.853203165"
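The startup-duration entry above carries its own arithmetic: end-to-end startup (watchObservedRunningTime minus podCreationTimestamp) is 1m22.62s, of which 79.84s was the image pull (lastFinishedPulling minus firstStartedPulling); the reported podStartSLOduration of ~2.78s appears to be the difference, i.e. startup time with the pull window excluded. Checking the numbers in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-28 07:26:11 +0000 UTC")
	firstPull := parse("2026-01-28 07:26:12.903393638 +0000 UTC")
	lastPull := parse("2026-01-28 07:27:32.747265183 +0000 UTC")
	running := parse("2026-01-28 07:27:33.621114356 +0000 UTC")

	e2e := running.Sub(created)        // 1m22.621114356s, the podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 1m19.843871544s spent pulling the image
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", e2e-pulling) // 2.777242812s, matching the log
}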
Jan 28 07:27:39 crc kubenswrapper[4642]: I0128 07:27:39.098015 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:27:39 crc kubenswrapper[4642]: E0128 07:27:39.098569 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:27:51 crc kubenswrapper[4642]: I0128 07:27:51.099100 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:27:51 crc kubenswrapper[4642]: E0128 07:27:51.099684 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:28:06 crc kubenswrapper[4642]: I0128 07:28:06.098299 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:28:06 crc kubenswrapper[4642]: E0128 07:28:06.098927 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:28:18 crc kubenswrapper[4642]: I0128 07:28:18.098614 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:28:18 crc kubenswrapper[4642]: E0128 07:28:18.099222 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:28:31 crc kubenswrapper[4642]: I0128 07:28:31.098342 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:28:31 crc kubenswrapper[4642]: E0128 07:28:31.099158 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:28:45 crc kubenswrapper[4642]: I0128 07:28:45.098837 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:28:45 crc kubenswrapper[4642]: E0128 07:28:45.099522 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:28:57 crc kubenswrapper[4642]: I0128 07:28:57.102528 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:28:57 crc kubenswrapper[4642]: E0128 07:28:57.103090 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:29:12 crc kubenswrapper[4642]: I0128 07:29:12.098917 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212"
Jan 28 07:29:13 crc kubenswrapper[4642]: I0128 07:29:13.189994 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1"}
Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.132410 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q"]
Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.133769 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q"
Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.136596 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.137981 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.140862 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q"]
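A side note on the name collect-profiles-29493090-c9d2q added above: the CronJob controller, as far as I know, suffixes each Job with its scheduled slot expressed in minutes since the Unix epoch, which is why the number decodes exactly to this run's 07:30:00 schedule time:

package main

import (
	"fmt"
	"time"
)

func main() {
	// 29493090 is the suffix from the pod name; interpret it as minutes
	// since the Unix epoch (an assumption about the naming convention).
	const slotMinutes = 29493090
	t := time.Unix(slotMinutes*60, 0).UTC()
	fmt.Println(t) // 2026-01-28 07:30:00 +0000 UTC -- matching the SyncLoop ADD above
}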
\"kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.310073 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjh9\" (UniqueName: \"kubernetes.io/projected/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-kube-api-access-msjh9\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.310120 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-config-volume\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.310147 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.310995 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-config-volume\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.314890 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.322855 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjh9\" (UniqueName: \"kubernetes.io/projected/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-kube-api-access-msjh9\") pod \"collect-profiles-29493090-c9d2q\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.449629 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:00 crc kubenswrapper[4642]: I0128 07:30:00.798806 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q"] Jan 28 07:30:01 crc kubenswrapper[4642]: I0128 07:30:01.475871 4642 generic.go:334] "Generic (PLEG): container finished" podID="139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" containerID="883d806d7adaaf921ed98733bdc988cff2d34cf513a9fec598092055d6f289bb" exitCode=0 Jan 28 07:30:01 crc kubenswrapper[4642]: I0128 07:30:01.475913 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" event={"ID":"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5","Type":"ContainerDied","Data":"883d806d7adaaf921ed98733bdc988cff2d34cf513a9fec598092055d6f289bb"} Jan 28 07:30:01 crc kubenswrapper[4642]: I0128 07:30:01.476099 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" event={"ID":"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5","Type":"ContainerStarted","Data":"7dff691ea0553b09ce7fde734fb3b38223ef31d9bdabfd59dae40fdb0e3536de"} Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.754580 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.947351 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjh9\" (UniqueName: \"kubernetes.io/projected/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-kube-api-access-msjh9\") pod \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.947421 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-config-volume\") pod \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.947590 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume\") pod \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\" (UID: \"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5\") " Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.947988 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-config-volume" (OuterVolumeSpecName: "config-volume") pod "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" (UID: "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.951795 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-kube-api-access-msjh9" (OuterVolumeSpecName: "kube-api-access-msjh9") pod "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" (UID: "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5"). InnerVolumeSpecName "kube-api-access-msjh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:30:02 crc kubenswrapper[4642]: I0128 07:30:02.951874 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" (UID: "139e7a4a-4639-4bea-9d3e-50e4b5de6aa5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.048880 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjh9\" (UniqueName: \"kubernetes.io/projected/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-kube-api-access-msjh9\") on node \"crc\" DevicePath \"\"" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.048902 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.048914 4642 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/139e7a4a-4639-4bea-9d3e-50e4b5de6aa5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.489582 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" event={"ID":"139e7a4a-4639-4bea-9d3e-50e4b5de6aa5","Type":"ContainerDied","Data":"7dff691ea0553b09ce7fde734fb3b38223ef31d9bdabfd59dae40fdb0e3536de"} Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.489758 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dff691ea0553b09ce7fde734fb3b38223ef31d9bdabfd59dae40fdb0e3536de" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.489627 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493090-c9d2q" Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.801716 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"] Jan 28 07:30:03 crc kubenswrapper[4642]: I0128 07:30:03.806923 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493045-ndfr5"] Jan 28 07:30:05 crc kubenswrapper[4642]: I0128 07:30:05.106358 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d980a7da-b442-446b-8bde-56e17d70b28b" path="/var/lib/kubelet/pods/d980a7da-b442-446b-8bde-56e17d70b28b/volumes" Jan 28 07:31:03 crc kubenswrapper[4642]: I0128 07:31:03.726138 4642 scope.go:117] "RemoveContainer" containerID="83b475246567f73e3201e910ebf0c39bd1b9da1544ed5785ebc3d776eb47d479" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.170942 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:19 crc kubenswrapper[4642]: E0128 07:31:19.171630 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" containerName="collect-profiles" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.171641 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" containerName="collect-profiles" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.171802 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="139e7a4a-4639-4bea-9d3e-50e4b5de6aa5" containerName="collect-profiles" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.172849 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.179146 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.181681 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmnq\" (UniqueName: \"kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.181802 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.181871 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.283891 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndmnq\" (UniqueName: \"kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.284083 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.284328 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.284557 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.284672 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.298128 4642 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ndmnq\" (UniqueName: \"kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq\") pod \"redhat-marketplace-fcnt8\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.487822 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.861450 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:19 crc kubenswrapper[4642]: I0128 07:31:19.928771 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerStarted","Data":"750870bb50102fe783b19d41fbd1f3155a00e393cdc6b0081b039fb34cc06be4"} Jan 28 07:31:20 crc kubenswrapper[4642]: I0128 07:31:20.935821 4642 generic.go:334] "Generic (PLEG): container finished" podID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerID="fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44" exitCode=0 Jan 28 07:31:20 crc kubenswrapper[4642]: I0128 07:31:20.935979 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerDied","Data":"fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44"} Jan 28 07:31:20 crc kubenswrapper[4642]: I0128 07:31:20.937595 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:31:21 crc kubenswrapper[4642]: I0128 07:31:21.943848 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerStarted","Data":"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40"} Jan 28 07:31:22 crc kubenswrapper[4642]: I0128 07:31:22.952280 4642 generic.go:334] "Generic (PLEG): container finished" podID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerID="da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40" exitCode=0 Jan 28 07:31:22 crc kubenswrapper[4642]: I0128 07:31:22.952318 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerDied","Data":"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40"} Jan 28 07:31:23 crc kubenswrapper[4642]: I0128 07:31:23.960019 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerStarted","Data":"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8"} Jan 28 07:31:23 crc kubenswrapper[4642]: I0128 07:31:23.977239 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcnt8" podStartSLOduration=2.47619695 podStartE2EDuration="4.977222672s" podCreationTimestamp="2026-01-28 07:31:19 +0000 UTC" firstStartedPulling="2026-01-28 07:31:20.937322196 +0000 UTC m=+2604.169411004" lastFinishedPulling="2026-01-28 07:31:23.438347917 +0000 UTC m=+2606.670436726" observedRunningTime="2026-01-28 07:31:23.972244693 +0000 UTC m=+2607.204333502" watchObservedRunningTime="2026-01-28 07:31:23.977222672 +0000 UTC 
m=+2607.209311481" Jan 28 07:31:29 crc kubenswrapper[4642]: I0128 07:31:29.488730 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:29 crc kubenswrapper[4642]: I0128 07:31:29.489116 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:29 crc kubenswrapper[4642]: I0128 07:31:29.520425 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:30 crc kubenswrapper[4642]: I0128 07:31:30.026587 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:30 crc kubenswrapper[4642]: I0128 07:31:30.059312 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.006491 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcnt8" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="registry-server" containerID="cri-o://76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8" gracePeriod=2 Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.410330 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.592162 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities\") pod \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.592229 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndmnq\" (UniqueName: \"kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq\") pod \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.592333 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content\") pod \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\" (UID: \"4549e294-1f4e-43b9-92fd-ec43ca274a7c\") " Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.592885 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities" (OuterVolumeSpecName: "utilities") pod "4549e294-1f4e-43b9-92fd-ec43ca274a7c" (UID: "4549e294-1f4e-43b9-92fd-ec43ca274a7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.596677 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq" (OuterVolumeSpecName: "kube-api-access-ndmnq") pod "4549e294-1f4e-43b9-92fd-ec43ca274a7c" (UID: "4549e294-1f4e-43b9-92fd-ec43ca274a7c"). InnerVolumeSpecName "kube-api-access-ndmnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.607217 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4549e294-1f4e-43b9-92fd-ec43ca274a7c" (UID: "4549e294-1f4e-43b9-92fd-ec43ca274a7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.694401 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.694427 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndmnq\" (UniqueName: \"kubernetes.io/projected/4549e294-1f4e-43b9-92fd-ec43ca274a7c-kube-api-access-ndmnq\") on node \"crc\" DevicePath \"\"" Jan 28 07:31:32 crc kubenswrapper[4642]: I0128 07:31:32.694438 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4549e294-1f4e-43b9-92fd-ec43ca274a7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.014940 4642 generic.go:334] "Generic (PLEG): container finished" podID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerID="76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8" exitCode=0 Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.014985 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerDied","Data":"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8"} Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.014996 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcnt8" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.015014 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcnt8" event={"ID":"4549e294-1f4e-43b9-92fd-ec43ca274a7c","Type":"ContainerDied","Data":"750870bb50102fe783b19d41fbd1f3155a00e393cdc6b0081b039fb34cc06be4"} Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.015030 4642 scope.go:117] "RemoveContainer" containerID="76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.029490 4642 scope.go:117] "RemoveContainer" containerID="da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.037819 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.043708 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcnt8"] Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.054468 4642 scope.go:117] "RemoveContainer" containerID="fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.076115 4642 scope.go:117] "RemoveContainer" containerID="76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8" Jan 28 07:31:33 crc kubenswrapper[4642]: E0128 07:31:33.076506 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8\": container with ID starting with 76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8 not found: ID does not exist" containerID="76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.076538 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8"} err="failed to get container status \"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8\": rpc error: code = NotFound desc = could not find container \"76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8\": container with ID starting with 76c9338f98eae06d0fa9a70320fd4eb16f296e876d7cb9e79a4b645a466d20e8 not found: ID does not exist" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.076556 4642 scope.go:117] "RemoveContainer" containerID="da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40" Jan 28 07:31:33 crc kubenswrapper[4642]: E0128 07:31:33.076801 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40\": container with ID starting with da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40 not found: ID does not exist" containerID="da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.076830 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40"} err="failed to get container status \"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40\": rpc error: code = NotFound desc = could not find 
container \"da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40\": container with ID starting with da7107db47a5283a343cc133a90ec0ec4878262c3082b85bc62c5250090ecf40 not found: ID does not exist" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.076852 4642 scope.go:117] "RemoveContainer" containerID="fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44" Jan 28 07:31:33 crc kubenswrapper[4642]: E0128 07:31:33.077119 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44\": container with ID starting with fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44 not found: ID does not exist" containerID="fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.077141 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44"} err="failed to get container status \"fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44\": rpc error: code = NotFound desc = could not find container \"fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44\": container with ID starting with fa697bc9a3ef89b762be9b06836ee99ee8112fb43dc38b39db7bf515164fac44 not found: ID does not exist" Jan 28 07:31:33 crc kubenswrapper[4642]: I0128 07:31:33.105719 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" path="/var/lib/kubelet/pods/4549e294-1f4e-43b9-92fd-ec43ca274a7c/volumes" Jan 28 07:31:38 crc kubenswrapper[4642]: I0128 07:31:38.199492 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:31:38 crc kubenswrapper[4642]: I0128 07:31:38.199887 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:32:08 crc kubenswrapper[4642]: I0128 07:32:08.199350 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:32:08 crc kubenswrapper[4642]: I0128 07:32:08.199731 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.199420 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 
07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.200722 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.200867 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.202666 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.202752 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1" gracePeriod=600 Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.423913 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1" exitCode=0 Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.423991 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1"} Jan 28 07:32:38 crc kubenswrapper[4642]: I0128 07:32:38.424181 4642 scope.go:117] "RemoveContainer" containerID="bba2d80ba7f88f6fda7268412cc8b7c2aef2558774f4bc4008196378f1695212" Jan 28 07:32:39 crc kubenswrapper[4642]: I0128 07:32:39.432815 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d"} Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.640608 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:10 crc kubenswrapper[4642]: E0128 07:34:10.641273 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="registry-server" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.641285 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="registry-server" Jan 28 07:34:10 crc kubenswrapper[4642]: E0128 07:34:10.641300 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="extract-utilities" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.641305 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="extract-utilities" Jan 28 07:34:10 crc kubenswrapper[4642]: E0128 
07:34:10.641327 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="extract-content" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.641332 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="extract-content" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.641480 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="4549e294-1f4e-43b9-92fd-ec43ca274a7c" containerName="registry-server" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.642558 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.649538 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.766253 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.766308 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789nz\" (UniqueName: \"kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.766578 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.868029 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.868148 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.868204 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789nz\" (UniqueName: \"kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.868525 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.868582 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.884018 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789nz\" (UniqueName: \"kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz\") pod \"community-operators-pdnt6\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:10 crc kubenswrapper[4642]: I0128 07:34:10.956634 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.239063 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2gq6n"] Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.240706 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.244785 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gq6n"] Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.274371 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.274449 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6b6\" (UniqueName: \"kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.274609 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.337640 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.375811 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 
crc kubenswrapper[4642]: I0128 07:34:11.375861 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6b6\" (UniqueName: \"kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.375920 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.376229 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.376266 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.390736 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6b6\" (UniqueName: \"kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6\") pod \"certified-operators-2gq6n\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") " pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.560304 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.962918 4642 generic.go:334] "Generic (PLEG): container finished" podID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerID="f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a" exitCode=0 Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.963122 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerDied","Data":"f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a"} Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.963144 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerStarted","Data":"e47d0f01a36c56442961fefea43fa01ad466dced3138fdbbfff7a4a7530f2953"} Jan 28 07:34:11 crc kubenswrapper[4642]: I0128 07:34:11.988817 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2gq6n"] Jan 28 07:34:11 crc kubenswrapper[4642]: W0128 07:34:11.991067 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50deb047_2f51_412b_9e83_3a2e034ad879.slice/crio-e03b8f8bdde87ddda299c701cd45ce8a15723403b9fd875c3e4c644e9058fcfa WatchSource:0}: Error finding container e03b8f8bdde87ddda299c701cd45ce8a15723403b9fd875c3e4c644e9058fcfa: Status 404 returned error can't find the container with id e03b8f8bdde87ddda299c701cd45ce8a15723403b9fd875c3e4c644e9058fcfa Jan 28 07:34:12 crc kubenswrapper[4642]: I0128 07:34:12.970661 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerStarted","Data":"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd"} Jan 28 07:34:12 crc kubenswrapper[4642]: I0128 07:34:12.972391 4642 generic.go:334] "Generic (PLEG): container finished" podID="50deb047-2f51-412b-9e83-3a2e034ad879" containerID="5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623" exitCode=0 Jan 28 07:34:12 crc kubenswrapper[4642]: I0128 07:34:12.972429 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerDied","Data":"5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623"} Jan 28 07:34:12 crc kubenswrapper[4642]: I0128 07:34:12.972450 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerStarted","Data":"e03b8f8bdde87ddda299c701cd45ce8a15723403b9fd875c3e4c644e9058fcfa"} Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.640417 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"] Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.642175 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.649706 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"] Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.813172 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkxb\" (UniqueName: \"kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.813274 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.813305 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.914696 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpkxb\" (UniqueName: \"kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.914762 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.914794 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.915323 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.915377 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.930534 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wpkxb\" (UniqueName: \"kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb\") pod \"redhat-operators-4h7lb\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.972969 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.979910 4642 generic.go:334] "Generic (PLEG): container finished" podID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerID="35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd" exitCode=0 Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.979989 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerDied","Data":"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd"} Jan 28 07:34:13 crc kubenswrapper[4642]: I0128 07:34:13.982001 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerStarted","Data":"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"} Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.358359 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"] Jan 28 07:34:14 crc kubenswrapper[4642]: W0128 07:34:14.360151 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b68ba7_e2bd_4ddb_8b63_59c264975733.slice/crio-5d8e1a11ac005105c0fd99d636310cf99a6f7ed9c2a40ac7c84e9c188d04686e WatchSource:0}: Error finding container 5d8e1a11ac005105c0fd99d636310cf99a6f7ed9c2a40ac7c84e9c188d04686e: Status 404 returned error can't find the container with id 5d8e1a11ac005105c0fd99d636310cf99a6f7ed9c2a40ac7c84e9c188d04686e Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.989106 4642 generic.go:334] "Generic (PLEG): container finished" podID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerID="434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd" exitCode=0 Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.989207 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerDied","Data":"434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd"} Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.989416 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerStarted","Data":"5d8e1a11ac005105c0fd99d636310cf99a6f7ed9c2a40ac7c84e9c188d04686e"} Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.991874 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerStarted","Data":"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820"} Jan 28 07:34:14 crc kubenswrapper[4642]: I0128 07:34:14.993577 4642 generic.go:334] "Generic (PLEG): container finished" podID="50deb047-2f51-412b-9e83-3a2e034ad879" containerID="13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798" exitCode=0 Jan 28 07:34:14 crc 
kubenswrapper[4642]: I0128 07:34:14.993606 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerDied","Data":"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"} Jan 28 07:34:15 crc kubenswrapper[4642]: I0128 07:34:15.024278 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdnt6" podStartSLOduration=2.481673787 podStartE2EDuration="5.024263718s" podCreationTimestamp="2026-01-28 07:34:10 +0000 UTC" firstStartedPulling="2026-01-28 07:34:11.964943687 +0000 UTC m=+2775.197032496" lastFinishedPulling="2026-01-28 07:34:14.507533618 +0000 UTC m=+2777.739622427" observedRunningTime="2026-01-28 07:34:15.019251866 +0000 UTC m=+2778.251340676" watchObservedRunningTime="2026-01-28 07:34:15.024263718 +0000 UTC m=+2778.256352527" Jan 28 07:34:16 crc kubenswrapper[4642]: I0128 07:34:16.002265 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerStarted","Data":"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"} Jan 28 07:34:16 crc kubenswrapper[4642]: I0128 07:34:16.021143 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2gq6n" podStartSLOduration=2.419834256 podStartE2EDuration="5.021132291s" podCreationTimestamp="2026-01-28 07:34:11 +0000 UTC" firstStartedPulling="2026-01-28 07:34:12.973671213 +0000 UTC m=+2776.205760011" lastFinishedPulling="2026-01-28 07:34:15.574969237 +0000 UTC m=+2778.807058046" observedRunningTime="2026-01-28 07:34:16.015113657 +0000 UTC m=+2779.247202465" watchObservedRunningTime="2026-01-28 07:34:16.021132291 +0000 UTC m=+2779.253221100" Jan 28 07:34:17 crc kubenswrapper[4642]: I0128 07:34:17.008958 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerStarted","Data":"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19"} Jan 28 07:34:18 crc kubenswrapper[4642]: I0128 07:34:18.015531 4642 generic.go:334] "Generic (PLEG): container finished" podID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerID="d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19" exitCode=0 Jan 28 07:34:18 crc kubenswrapper[4642]: I0128 07:34:18.015617 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerDied","Data":"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19"} Jan 28 07:34:19 crc kubenswrapper[4642]: I0128 07:34:19.023710 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerStarted","Data":"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7"} Jan 28 07:34:19 crc kubenswrapper[4642]: I0128 07:34:19.041675 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4h7lb" podStartSLOduration=2.525084081 podStartE2EDuration="6.04166098s" podCreationTimestamp="2026-01-28 07:34:13 +0000 UTC" firstStartedPulling="2026-01-28 07:34:14.990982525 +0000 UTC m=+2778.223071334" lastFinishedPulling="2026-01-28 07:34:18.507559424 +0000 
UTC m=+2781.739648233" observedRunningTime="2026-01-28 07:34:19.035451126 +0000 UTC m=+2782.267539935" watchObservedRunningTime="2026-01-28 07:34:19.04166098 +0000 UTC m=+2782.273749789" Jan 28 07:34:20 crc kubenswrapper[4642]: I0128 07:34:20.957388 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:20 crc kubenswrapper[4642]: I0128 07:34:20.957620 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:20 crc kubenswrapper[4642]: I0128 07:34:20.988326 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:21 crc kubenswrapper[4642]: I0128 07:34:21.067631 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:21 crc kubenswrapper[4642]: I0128 07:34:21.560616 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:21 crc kubenswrapper[4642]: I0128 07:34:21.560907 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:21 crc kubenswrapper[4642]: I0128 07:34:21.590658 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:21 crc kubenswrapper[4642]: I0128 07:34:21.632515 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:22 crc kubenswrapper[4642]: I0128 07:34:22.074449 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2gq6n" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.048552 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdnt6" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="registry-server" containerID="cri-o://8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820" gracePeriod=2 Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.428853 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.560263 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities\") pod \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.560457 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789nz\" (UniqueName: \"kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz\") pod \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.560568 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content\") pod \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\" (UID: \"37dcc5ae-05d0-41c4-8a34-38054aaca7f7\") " Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.560751 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities" (OuterVolumeSpecName: "utilities") pod "37dcc5ae-05d0-41c4-8a34-38054aaca7f7" (UID: "37dcc5ae-05d0-41c4-8a34-38054aaca7f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.560964 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.565042 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz" (OuterVolumeSpecName: "kube-api-access-789nz") pod "37dcc5ae-05d0-41c4-8a34-38054aaca7f7" (UID: "37dcc5ae-05d0-41c4-8a34-38054aaca7f7"). InnerVolumeSpecName "kube-api-access-789nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.597481 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37dcc5ae-05d0-41c4-8a34-38054aaca7f7" (UID: "37dcc5ae-05d0-41c4-8a34-38054aaca7f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.662988 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-789nz\" (UniqueName: \"kubernetes.io/projected/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-kube-api-access-789nz\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.663015 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dcc5ae-05d0-41c4-8a34-38054aaca7f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.973922 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:23 crc kubenswrapper[4642]: I0128 07:34:23.974118 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.006435 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.056103 4642 generic.go:334] "Generic (PLEG): container finished" podID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerID="8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820" exitCode=0 Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.056145 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdnt6" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.056211 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerDied","Data":"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820"} Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.056254 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdnt6" event={"ID":"37dcc5ae-05d0-41c4-8a34-38054aaca7f7","Type":"ContainerDied","Data":"e47d0f01a36c56442961fefea43fa01ad466dced3138fdbbfff7a4a7530f2953"} Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.056275 4642 scope.go:117] "RemoveContainer" containerID="8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.072767 4642 scope.go:117] "RemoveContainer" containerID="35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.084349 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.089474 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.091636 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdnt6"] Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.116361 4642 scope.go:117] "RemoveContainer" containerID="f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.131140 4642 scope.go:117] "RemoveContainer" containerID="8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820" Jan 28 07:34:24 crc 
kubenswrapper[4642]: E0128 07:34:24.131472 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820\": container with ID starting with 8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820 not found: ID does not exist" containerID="8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.131504 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820"} err="failed to get container status \"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820\": rpc error: code = NotFound desc = could not find container \"8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820\": container with ID starting with 8c4577910754cfd2aec8f894e1cbf1216da3eb7ba9ec1bb1ee0105658eb08820 not found: ID does not exist" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.131522 4642 scope.go:117] "RemoveContainer" containerID="35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd" Jan 28 07:34:24 crc kubenswrapper[4642]: E0128 07:34:24.131785 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd\": container with ID starting with 35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd not found: ID does not exist" containerID="35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.131807 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd"} err="failed to get container status \"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd\": rpc error: code = NotFound desc = could not find container \"35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd\": container with ID starting with 35cff89ba4b1779da7095855bda3296a5907d3be37d4008d3f02d687df129bdd not found: ID does not exist" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.131822 4642 scope.go:117] "RemoveContainer" containerID="f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a" Jan 28 07:34:24 crc kubenswrapper[4642]: E0128 07:34:24.132027 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a\": container with ID starting with f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a not found: ID does not exist" containerID="f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a" Jan 28 07:34:24 crc kubenswrapper[4642]: I0128 07:34:24.132052 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a"} err="failed to get container status \"f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a\": rpc error: code = NotFound desc = could not find container \"f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a\": container with ID starting with f4bd32f38f9ab92e1333545d65fbe7eb720217cc2e0e32f5bf7f0971f8b7b06a not found: ID does not exist" Jan 28 07:34:24 crc kubenswrapper[4642]: 
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.065272 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2gq6n" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="registry-server" containerID="cri-o://4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418" gracePeriod=2
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.109059 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" path="/var/lib/kubelet/pods/37dcc5ae-05d0-41c4-8a34-38054aaca7f7/volumes"
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.449473 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2gq6n"
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.592880 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities\") pod \"50deb047-2f51-412b-9e83-3a2e034ad879\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") "
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.592915 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6b6\" (UniqueName: \"kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6\") pod \"50deb047-2f51-412b-9e83-3a2e034ad879\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") "
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.593156 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content\") pod \"50deb047-2f51-412b-9e83-3a2e034ad879\" (UID: \"50deb047-2f51-412b-9e83-3a2e034ad879\") "
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.593535 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities" (OuterVolumeSpecName: "utilities") pod "50deb047-2f51-412b-9e83-3a2e034ad879" (UID: "50deb047-2f51-412b-9e83-3a2e034ad879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.597382 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6" (OuterVolumeSpecName: "kube-api-access-lt6b6") pod "50deb047-2f51-412b-9e83-3a2e034ad879" (UID: "50deb047-2f51-412b-9e83-3a2e034ad879"). InnerVolumeSpecName "kube-api-access-lt6b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.626049 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50deb047-2f51-412b-9e83-3a2e034ad879" (UID: "50deb047-2f51-412b-9e83-3a2e034ad879"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.695094 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.695122 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50deb047-2f51-412b-9e83-3a2e034ad879-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 07:34:25 crc kubenswrapper[4642]: I0128 07:34:25.695132 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt6b6\" (UniqueName: \"kubernetes.io/projected/50deb047-2f51-412b-9e83-3a2e034ad879-kube-api-access-lt6b6\") on node \"crc\" DevicePath \"\""
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.073699 4642 generic.go:334] "Generic (PLEG): container finished" podID="50deb047-2f51-412b-9e83-3a2e034ad879" containerID="4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418" exitCode=0
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.073761 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2gq6n"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.073792 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerDied","Data":"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"}
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.073841 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2gq6n" event={"ID":"50deb047-2f51-412b-9e83-3a2e034ad879","Type":"ContainerDied","Data":"e03b8f8bdde87ddda299c701cd45ce8a15723403b9fd875c3e4c644e9058fcfa"}
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.073859 4642 scope.go:117] "RemoveContainer" containerID="4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.090213 4642 scope.go:117] "RemoveContainer" containerID="13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.096658 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2gq6n"]
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.101588 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2gq6n"]
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.107454 4642 scope.go:117] "RemoveContainer" containerID="5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.138010 4642 scope.go:117] "RemoveContainer" containerID="4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"
Jan 28 07:34:26 crc kubenswrapper[4642]: E0128 07:34:26.138425 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418\": container with ID starting with 4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418 not found: ID does not exist" containerID="4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.138457 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418"} err="failed to get container status \"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418\": rpc error: code = NotFound desc = could not find container \"4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418\": container with ID starting with 4835e803568b2f62d8e7e8f897cb935daecbe80c0ba3ee4e175fa82f9a99a418 not found: ID does not exist"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.138479 4642 scope.go:117] "RemoveContainer" containerID="13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"
Jan 28 07:34:26 crc kubenswrapper[4642]: E0128 07:34:26.138805 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798\": container with ID starting with 13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798 not found: ID does not exist" containerID="13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.138833 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798"} err="failed to get container status \"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798\": rpc error: code = NotFound desc = could not find container \"13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798\": container with ID starting with 13318f558d37110cfa916460efeec9211c6675caf3335d127723c5e8fe1f9798 not found: ID does not exist"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.138854 4642 scope.go:117] "RemoveContainer" containerID="5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623"
Jan 28 07:34:26 crc kubenswrapper[4642]: E0128 07:34:26.139124 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623\": container with ID starting with 5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623 not found: ID does not exist" containerID="5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623"
Jan 28 07:34:26 crc kubenswrapper[4642]: I0128 07:34:26.139159 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623"} err="failed to get container status \"5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623\": rpc error: code = NotFound desc = could not find container \"5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623\": container with ID starting with 5f51c3b5849797fc6935b9ed9d4b7ee5ac06cd98b96e6cb0ebb7797dd4b39623 not found: ID does not exist"
Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.106295 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" path="/var/lib/kubelet/pods/50deb047-2f51-412b-9e83-3a2e034ad879/volumes"
Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.234092 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"]
Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.234313 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4h7lb" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="registry-server" containerID="cri-o://edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" gracePeriod=2
period" pod="openshift-marketplace/redhat-operators-4h7lb" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="registry-server" containerID="cri-o://edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" gracePeriod=2 Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.615952 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.725235 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpkxb\" (UniqueName: \"kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb\") pod \"68b68ba7-e2bd-4ddb-8b63-59c264975733\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.725517 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content\") pod \"68b68ba7-e2bd-4ddb-8b63-59c264975733\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.725729 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities\") pod \"68b68ba7-e2bd-4ddb-8b63-59c264975733\" (UID: \"68b68ba7-e2bd-4ddb-8b63-59c264975733\") " Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.726241 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities" (OuterVolumeSpecName: "utilities") pod "68b68ba7-e2bd-4ddb-8b63-59c264975733" (UID: "68b68ba7-e2bd-4ddb-8b63-59c264975733"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.726420 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.729780 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb" (OuterVolumeSpecName: "kube-api-access-wpkxb") pod "68b68ba7-e2bd-4ddb-8b63-59c264975733" (UID: "68b68ba7-e2bd-4ddb-8b63-59c264975733"). InnerVolumeSpecName "kube-api-access-wpkxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.809245 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68b68ba7-e2bd-4ddb-8b63-59c264975733" (UID: "68b68ba7-e2bd-4ddb-8b63-59c264975733"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.827579 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b68ba7-e2bd-4ddb-8b63-59c264975733-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:27 crc kubenswrapper[4642]: I0128 07:34:27.827606 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpkxb\" (UniqueName: \"kubernetes.io/projected/68b68ba7-e2bd-4ddb-8b63-59c264975733-kube-api-access-wpkxb\") on node \"crc\" DevicePath \"\"" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.087394 4642 generic.go:334] "Generic (PLEG): container finished" podID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerID="edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" exitCode=0 Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.087437 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4h7lb" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.087477 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerDied","Data":"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7"} Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.087714 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h7lb" event={"ID":"68b68ba7-e2bd-4ddb-8b63-59c264975733","Type":"ContainerDied","Data":"5d8e1a11ac005105c0fd99d636310cf99a6f7ed9c2a40ac7c84e9c188d04686e"} Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.087735 4642 scope.go:117] "RemoveContainer" containerID="edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.102977 4642 scope.go:117] "RemoveContainer" containerID="d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.126990 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"] Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.135345 4642 scope.go:117] "RemoveContainer" containerID="434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.135900 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4h7lb"] Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.157777 4642 scope.go:117] "RemoveContainer" containerID="edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" Jan 28 07:34:28 crc kubenswrapper[4642]: E0128 07:34:28.158617 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7\": container with ID starting with edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7 not found: ID does not exist" containerID="edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.158654 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7"} err="failed to get container status \"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7\": 
rpc error: code = NotFound desc = could not find container \"edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7\": container with ID starting with edade801b300304e60972a1942a068c74518f7433fdf35ef1f20be39b8339cb7 not found: ID does not exist" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.158677 4642 scope.go:117] "RemoveContainer" containerID="d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19" Jan 28 07:34:28 crc kubenswrapper[4642]: E0128 07:34:28.159301 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19\": container with ID starting with d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19 not found: ID does not exist" containerID="d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.159331 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19"} err="failed to get container status \"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19\": rpc error: code = NotFound desc = could not find container \"d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19\": container with ID starting with d330a5b62efd18ccc064032049ec13afbd810ee5c19dc9ade0f31b056fe87f19 not found: ID does not exist" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.159351 4642 scope.go:117] "RemoveContainer" containerID="434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd" Jan 28 07:34:28 crc kubenswrapper[4642]: E0128 07:34:28.159661 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd\": container with ID starting with 434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd not found: ID does not exist" containerID="434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd" Jan 28 07:34:28 crc kubenswrapper[4642]: I0128 07:34:28.159697 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd"} err="failed to get container status \"434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd\": rpc error: code = NotFound desc = could not find container \"434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd\": container with ID starting with 434aee1922add39a67461c8eb7c50d171a5748ebc998eef0eb63b28ba57120cd not found: ID does not exist" Jan 28 07:34:29 crc kubenswrapper[4642]: I0128 07:34:29.104949 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" path="/var/lib/kubelet/pods/68b68ba7-e2bd-4ddb-8b63-59c264975733/volumes" Jan 28 07:34:38 crc kubenswrapper[4642]: I0128 07:34:38.199605 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:34:38 crc kubenswrapper[4642]: I0128 07:34:38.199969 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" 
podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:35:08 crc kubenswrapper[4642]: I0128 07:35:08.199646 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:35:08 crc kubenswrapper[4642]: I0128 07:35:08.200056 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.199586 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.199977 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.200015 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.200480 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.200535 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" gracePeriod=600 Jan 28 07:35:38 crc kubenswrapper[4642]: E0128 07:35:38.313941 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.491459 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" exitCode=0 Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.491523 4642 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d"} Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.491641 4642 scope.go:117] "RemoveContainer" containerID="d2d0b0db4080cd7c08fb0fdc1ae1eb2cb0c5941919faee3076c5eecd9c22d8e1" Jan 28 07:35:38 crc kubenswrapper[4642]: I0128 07:35:38.492149 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:35:38 crc kubenswrapper[4642]: E0128 07:35:38.492379 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:35:49 crc kubenswrapper[4642]: I0128 07:35:49.098795 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:35:49 crc kubenswrapper[4642]: E0128 07:35:49.099471 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:01 crc kubenswrapper[4642]: I0128 07:36:01.098505 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:36:01 crc kubenswrapper[4642]: E0128 07:36:01.099322 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:13 crc kubenswrapper[4642]: I0128 07:36:13.101857 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:36:13 crc kubenswrapper[4642]: E0128 07:36:13.102443 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:18 crc kubenswrapper[4642]: I0128 07:36:18.759979 4642 generic.go:334] "Generic (PLEG): container finished" podID="7e694181-faba-42ea-a552-04cdb4a7536d" containerID="dabfb3b37345f96f215b678341b602af7564a5e244036f63ec16786cdb2543c7" exitCode=0 Jan 28 07:36:18 crc kubenswrapper[4642]: I0128 07:36:18.760254 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"7e694181-faba-42ea-a552-04cdb4a7536d","Type":"ContainerDied","Data":"dabfb3b37345f96f215b678341b602af7564a5e244036f63ec16786cdb2543c7"} Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.047470 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.147968 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148025 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148106 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148127 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148162 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148223 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148260 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drx2n\" (UniqueName: \"kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148281 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: \"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148302 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key\") pod \"7e694181-faba-42ea-a552-04cdb4a7536d\" (UID: 
\"7e694181-faba-42ea-a552-04cdb4a7536d\") " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148988 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data" (OuterVolumeSpecName: "config-data") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.148988 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.153342 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n" (OuterVolumeSpecName: "kube-api-access-drx2n") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "kube-api-access-drx2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.153351 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.154216 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.168550 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.168946 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.169397 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.180304 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7e694181-faba-42ea-a552-04cdb4a7536d" (UID: "7e694181-faba-42ea-a552-04cdb4a7536d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249815 4642 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249837 4642 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249847 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249856 4642 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7e694181-faba-42ea-a552-04cdb4a7536d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249865 4642 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249873 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drx2n\" (UniqueName: \"kubernetes.io/projected/7e694181-faba-42ea-a552-04cdb4a7536d-kube-api-access-drx2n\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249900 4642 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249908 4642 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e694181-faba-42ea-a552-04cdb4a7536d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.249917 4642 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e694181-faba-42ea-a552-04cdb4a7536d-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.263734 4642 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.351158 4642 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.772589 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"7e694181-faba-42ea-a552-04cdb4a7536d","Type":"ContainerDied","Data":"1cfb9bcec2b33f27a5f12e9f5221c309455796a813565130d8f2d3ea661c5922"} Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.772807 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cfb9bcec2b33f27a5f12e9f5221c309455796a813565130d8f2d3ea661c5922" Jan 28 07:36:20 crc kubenswrapper[4642]: I0128 07:36:20.772639 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 07:36:26 crc kubenswrapper[4642]: I0128 07:36:26.098600 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:36:26 crc kubenswrapper[4642]: E0128 07:36:26.099078 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.027690 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028444 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028457 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028468 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028473 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028485 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028491 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028501 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028508 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028515 4642 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028520 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028531 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028537 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028545 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028550 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="extract-content" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028566 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028571 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028584 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028590 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="extract-utilities" Jan 28 07:36:29 crc kubenswrapper[4642]: E0128 07:36:29.028604 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e694181-faba-42ea-a552-04cdb4a7536d" containerName="tempest-tests-tempest-tests-runner" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028609 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e694181-faba-42ea-a552-04cdb4a7536d" containerName="tempest-tests-tempest-tests-runner" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028765 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e694181-faba-42ea-a552-04cdb4a7536d" containerName="tempest-tests-tempest-tests-runner" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028779 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b68ba7-e2bd-4ddb-8b63-59c264975733" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028793 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dcc5ae-05d0-41c4-8a34-38054aaca7f7" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.028801 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="50deb047-2f51-412b-9e83-3a2e034ad879" containerName="registry-server" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.029378 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.030785 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-x24mr" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.034336 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.084136 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.084413 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmj4d\" (UniqueName: \"kubernetes.io/projected/57fc1bb3-928f-4db7-81b2-6fe911be8403-kube-api-access-gmj4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.185360 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.185455 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmj4d\" (UniqueName: \"kubernetes.io/projected/57fc1bb3-928f-4db7-81b2-6fe911be8403-kube-api-access-gmj4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.185667 4642 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.201445 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmj4d\" (UniqueName: \"kubernetes.io/projected/57fc1bb3-928f-4db7-81b2-6fe911be8403-kube-api-access-gmj4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.203703 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"57fc1bb3-928f-4db7-81b2-6fe911be8403\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc 
kubenswrapper[4642]: I0128 07:36:29.343942 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.696667 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.699503 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:36:29 crc kubenswrapper[4642]: I0128 07:36:29.823845 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"57fc1bb3-928f-4db7-81b2-6fe911be8403","Type":"ContainerStarted","Data":"b4c043c710dcf2032f1a7e50a61483e473b6645c3db180e01c516b718fab1225"} Jan 28 07:36:30 crc kubenswrapper[4642]: I0128 07:36:30.831407 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"57fc1bb3-928f-4db7-81b2-6fe911be8403","Type":"ContainerStarted","Data":"faa5b2a47568f1fb2f77cdf5c41c4a66dc26079d3f15c2d0898b89dd16227c1d"} Jan 28 07:36:30 crc kubenswrapper[4642]: I0128 07:36:30.842610 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=0.965380916 podStartE2EDuration="1.842594945s" podCreationTimestamp="2026-01-28 07:36:29 +0000 UTC" firstStartedPulling="2026-01-28 07:36:29.699288289 +0000 UTC m=+2912.931377098" lastFinishedPulling="2026-01-28 07:36:30.576502318 +0000 UTC m=+2913.808591127" observedRunningTime="2026-01-28 07:36:30.841956645 +0000 UTC m=+2914.074045454" watchObservedRunningTime="2026-01-28 07:36:30.842594945 +0000 UTC m=+2914.074683754" Jan 28 07:36:41 crc kubenswrapper[4642]: I0128 07:36:41.097966 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:36:41 crc kubenswrapper[4642]: E0128 07:36:41.098572 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.153855 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xtv7h/must-gather-szpf5"] Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.155688 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.157503 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xtv7h"/"openshift-service-ca.crt" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.157785 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xtv7h"/"default-dockercfg-h7cg4" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.157846 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xtv7h"/"kube-root-ca.crt" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.161118 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xtv7h/must-gather-szpf5"] Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.266326 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.266645 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khp9r\" (UniqueName: \"kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.368371 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khp9r\" (UniqueName: \"kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.368450 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.368895 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.383672 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khp9r\" (UniqueName: \"kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r\") pod \"must-gather-szpf5\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.474919 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.907978 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xtv7h/must-gather-szpf5"] Jan 28 07:36:49 crc kubenswrapper[4642]: W0128 07:36:49.909234 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93151dcc_9130_4c41_9721_035e1d183478.slice/crio-3ee548c1631d7b52bd6f76704a3de219c94116eeed7dce89bbb54be1c43a8035 WatchSource:0}: Error finding container 3ee548c1631d7b52bd6f76704a3de219c94116eeed7dce89bbb54be1c43a8035: Status 404 returned error can't find the container with id 3ee548c1631d7b52bd6f76704a3de219c94116eeed7dce89bbb54be1c43a8035 Jan 28 07:36:49 crc kubenswrapper[4642]: I0128 07:36:49.947322 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/must-gather-szpf5" event={"ID":"93151dcc-9130-4c41-9721-035e1d183478","Type":"ContainerStarted","Data":"3ee548c1631d7b52bd6f76704a3de219c94116eeed7dce89bbb54be1c43a8035"} Jan 28 07:36:53 crc kubenswrapper[4642]: I0128 07:36:53.099050 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:36:53 crc kubenswrapper[4642]: E0128 07:36:53.099431 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:36:54 crc kubenswrapper[4642]: I0128 07:36:54.977937 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/must-gather-szpf5" event={"ID":"93151dcc-9130-4c41-9721-035e1d183478","Type":"ContainerStarted","Data":"f7e4bb6792368919f1e6f89a40d13b67cefd8ebdc747bb45a3dfed0698e9ca13"} Jan 28 07:36:54 crc kubenswrapper[4642]: I0128 07:36:54.978152 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/must-gather-szpf5" event={"ID":"93151dcc-9130-4c41-9721-035e1d183478","Type":"ContainerStarted","Data":"89b48ccde97f03fb4132292404ffb04cce0fca8cd710783ca2518c8116e59654"} Jan 28 07:36:54 crc kubenswrapper[4642]: I0128 07:36:54.996046 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xtv7h/must-gather-szpf5" podStartSLOduration=1.563930268 podStartE2EDuration="5.996035424s" podCreationTimestamp="2026-01-28 07:36:49 +0000 UTC" firstStartedPulling="2026-01-28 07:36:49.911278009 +0000 UTC m=+2933.143366817" lastFinishedPulling="2026-01-28 07:36:54.343383173 +0000 UTC m=+2937.575471973" observedRunningTime="2026-01-28 07:36:54.991708744 +0000 UTC m=+2938.223797553" watchObservedRunningTime="2026-01-28 07:36:54.996035424 +0000 UTC m=+2938.228124233" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.093974 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f989j"] Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.095278 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.222144 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gfk\" (UniqueName: \"kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.222271 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.323765 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gfk\" (UniqueName: \"kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.324000 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.324060 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.338138 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gfk\" (UniqueName: \"kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk\") pod \"crc-debug-f989j\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: I0128 07:36:57.410598 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:36:57 crc kubenswrapper[4642]: W0128 07:36:57.440662 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5065948_60ea_4b7b_94ab_bef7ca763192.slice/crio-f9b887465b4a1d560728fcc0ec5e533fc7e68ba33425788c645ffa1fc81f0360 WatchSource:0}: Error finding container f9b887465b4a1d560728fcc0ec5e533fc7e68ba33425788c645ffa1fc81f0360: Status 404 returned error can't find the container with id f9b887465b4a1d560728fcc0ec5e533fc7e68ba33425788c645ffa1fc81f0360 Jan 28 07:36:58 crc kubenswrapper[4642]: I0128 07:36:58.017113 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-f989j" event={"ID":"c5065948-60ea-4b7b-94ab-bef7ca763192","Type":"ContainerStarted","Data":"f9b887465b4a1d560728fcc0ec5e533fc7e68ba33425788c645ffa1fc81f0360"} Jan 28 07:37:06 crc kubenswrapper[4642]: I0128 07:37:06.073261 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-f989j" event={"ID":"c5065948-60ea-4b7b-94ab-bef7ca763192","Type":"ContainerStarted","Data":"afae51055b23c9ded43a8b5b4b0a5ec69a1b7684d11466aaa1012703de2c964b"} Jan 28 07:37:06 crc kubenswrapper[4642]: I0128 07:37:06.088087 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xtv7h/crc-debug-f989j" podStartSLOduration=1.422718099 podStartE2EDuration="9.088072398s" podCreationTimestamp="2026-01-28 07:36:57 +0000 UTC" firstStartedPulling="2026-01-28 07:36:57.44377306 +0000 UTC m=+2940.675861869" lastFinishedPulling="2026-01-28 07:37:05.109127359 +0000 UTC m=+2948.341216168" observedRunningTime="2026-01-28 07:37:06.083779492 +0000 UTC m=+2949.315868301" watchObservedRunningTime="2026-01-28 07:37:06.088072398 +0000 UTC m=+2949.320161206" Jan 28 07:37:07 crc kubenswrapper[4642]: I0128 07:37:07.103626 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:37:07 crc kubenswrapper[4642]: E0128 07:37:07.104154 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:37:21 crc kubenswrapper[4642]: I0128 07:37:21.098525 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:37:21 crc kubenswrapper[4642]: E0128 07:37:21.099078 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:37:36 crc kubenswrapper[4642]: I0128 07:37:36.098241 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:37:36 crc kubenswrapper[4642]: E0128 07:37:36.098847 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:37:36 crc kubenswrapper[4642]: I0128 07:37:36.262621 4642 generic.go:334] "Generic (PLEG): container finished" podID="c5065948-60ea-4b7b-94ab-bef7ca763192" containerID="afae51055b23c9ded43a8b5b4b0a5ec69a1b7684d11466aaa1012703de2c964b" exitCode=0 Jan 28 07:37:36 crc kubenswrapper[4642]: I0128 07:37:36.262701 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-f989j" event={"ID":"c5065948-60ea-4b7b-94ab-bef7ca763192","Type":"ContainerDied","Data":"afae51055b23c9ded43a8b5b4b0a5ec69a1b7684d11466aaa1012703de2c964b"} Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.341609 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.371797 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f989j"] Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.377429 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f989j"] Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.452463 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host\") pod \"c5065948-60ea-4b7b-94ab-bef7ca763192\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.452561 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host" (OuterVolumeSpecName: "host") pod "c5065948-60ea-4b7b-94ab-bef7ca763192" (UID: "c5065948-60ea-4b7b-94ab-bef7ca763192"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.452599 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gfk\" (UniqueName: \"kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk\") pod \"c5065948-60ea-4b7b-94ab-bef7ca763192\" (UID: \"c5065948-60ea-4b7b-94ab-bef7ca763192\") " Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.453163 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5065948-60ea-4b7b-94ab-bef7ca763192-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.456838 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk" (OuterVolumeSpecName: "kube-api-access-f7gfk") pod "c5065948-60ea-4b7b-94ab-bef7ca763192" (UID: "c5065948-60ea-4b7b-94ab-bef7ca763192"). InnerVolumeSpecName "kube-api-access-f7gfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:37:37 crc kubenswrapper[4642]: I0128 07:37:37.555002 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gfk\" (UniqueName: \"kubernetes.io/projected/c5065948-60ea-4b7b-94ab-bef7ca763192-kube-api-access-f7gfk\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.275398 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b887465b4a1d560728fcc0ec5e533fc7e68ba33425788c645ffa1fc81f0360" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.275448 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f989j" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.474907 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f5t8p"] Jan 28 07:37:38 crc kubenswrapper[4642]: E0128 07:37:38.475212 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5065948-60ea-4b7b-94ab-bef7ca763192" containerName="container-00" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.475224 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5065948-60ea-4b7b-94ab-bef7ca763192" containerName="container-00" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.475394 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5065948-60ea-4b7b-94ab-bef7ca763192" containerName="container-00" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.475893 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.567815 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc9wh\" (UniqueName: \"kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.567890 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.669220 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc9wh\" (UniqueName: \"kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.669307 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.669445 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " 
pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.682550 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc9wh\" (UniqueName: \"kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh\") pod \"crc-debug-f5t8p\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:38 crc kubenswrapper[4642]: I0128 07:37:38.789690 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.105484 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5065948-60ea-4b7b-94ab-bef7ca763192" path="/var/lib/kubelet/pods/c5065948-60ea-4b7b-94ab-bef7ca763192/volumes" Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.282648 4642 generic.go:334] "Generic (PLEG): container finished" podID="0114aebf-4991-4ddd-8097-ce22d6d1010e" containerID="e19c76b6d396a8526daa3a33c1d65982c5729b89ef605901f5ce38a1d6bc6ba4" exitCode=0 Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.282682 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" event={"ID":"0114aebf-4991-4ddd-8097-ce22d6d1010e","Type":"ContainerDied","Data":"e19c76b6d396a8526daa3a33c1d65982c5729b89ef605901f5ce38a1d6bc6ba4"} Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.282705 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" event={"ID":"0114aebf-4991-4ddd-8097-ce22d6d1010e","Type":"ContainerStarted","Data":"2b3b88d304fc181a21aa999af78bab58b98cb8888581a5546145c40cd04391be"} Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.653028 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f5t8p"] Jan 28 07:37:39 crc kubenswrapper[4642]: I0128 07:37:39.659164 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-f5t8p"] Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.353039 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.492763 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host\") pod \"0114aebf-4991-4ddd-8097-ce22d6d1010e\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.492900 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host" (OuterVolumeSpecName: "host") pod "0114aebf-4991-4ddd-8097-ce22d6d1010e" (UID: "0114aebf-4991-4ddd-8097-ce22d6d1010e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.493087 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc9wh\" (UniqueName: \"kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh\") pod \"0114aebf-4991-4ddd-8097-ce22d6d1010e\" (UID: \"0114aebf-4991-4ddd-8097-ce22d6d1010e\") " Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.493514 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0114aebf-4991-4ddd-8097-ce22d6d1010e-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.497278 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh" (OuterVolumeSpecName: "kube-api-access-qc9wh") pod "0114aebf-4991-4ddd-8097-ce22d6d1010e" (UID: "0114aebf-4991-4ddd-8097-ce22d6d1010e"). InnerVolumeSpecName "kube-api-access-qc9wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.594333 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc9wh\" (UniqueName: \"kubernetes.io/projected/0114aebf-4991-4ddd-8097-ce22d6d1010e-kube-api-access-qc9wh\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.759031 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-z2w4z"] Jan 28 07:37:40 crc kubenswrapper[4642]: E0128 07:37:40.759344 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0114aebf-4991-4ddd-8097-ce22d6d1010e" containerName="container-00" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.759356 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0114aebf-4991-4ddd-8097-ce22d6d1010e" containerName="container-00" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.759510 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0114aebf-4991-4ddd-8097-ce22d6d1010e" containerName="container-00" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.759973 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.797416 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.797456 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7npvb\" (UniqueName: \"kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.898542 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.898571 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7npvb\" (UniqueName: \"kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.898898 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:40 crc kubenswrapper[4642]: I0128 07:37:40.911427 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7npvb\" (UniqueName: \"kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb\") pod \"crc-debug-z2w4z\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.072313 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:41 crc kubenswrapper[4642]: W0128 07:37:41.097682 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec3c60b_e7e1_44d2_b23b_1549c5c349bf.slice/crio-052bcbfd8fe8de917a87733266fcd3d9eb98cdccd4c081d49e500a29cc5d0332 WatchSource:0}: Error finding container 052bcbfd8fe8de917a87733266fcd3d9eb98cdccd4c081d49e500a29cc5d0332: Status 404 returned error can't find the container with id 052bcbfd8fe8de917a87733266fcd3d9eb98cdccd4c081d49e500a29cc5d0332 Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.107126 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0114aebf-4991-4ddd-8097-ce22d6d1010e" path="/var/lib/kubelet/pods/0114aebf-4991-4ddd-8097-ce22d6d1010e/volumes" Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.296124 4642 generic.go:334] "Generic (PLEG): container finished" podID="7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" containerID="13dfe94f4d08201f1b15239638ddd6fec9b59da777319267b2dc316df0298aba" exitCode=0 Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.296179 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" event={"ID":"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf","Type":"ContainerDied","Data":"13dfe94f4d08201f1b15239638ddd6fec9b59da777319267b2dc316df0298aba"} Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.296219 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" event={"ID":"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf","Type":"ContainerStarted","Data":"052bcbfd8fe8de917a87733266fcd3d9eb98cdccd4c081d49e500a29cc5d0332"} Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.297504 4642 scope.go:117] "RemoveContainer" containerID="e19c76b6d396a8526daa3a33c1d65982c5729b89ef605901f5ce38a1d6bc6ba4" Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.297507 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-f5t8p" Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.325540 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-z2w4z"] Jan 28 07:37:41 crc kubenswrapper[4642]: I0128 07:37:41.334698 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xtv7h/crc-debug-z2w4z"] Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.370314 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.525415 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host\") pod \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.525536 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host" (OuterVolumeSpecName: "host") pod "7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" (UID: "7ec3c60b-e7e1-44d2-b23b-1549c5c349bf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.525556 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7npvb\" (UniqueName: \"kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb\") pod \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\" (UID: \"7ec3c60b-e7e1-44d2-b23b-1549c5c349bf\") " Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.526009 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.529664 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb" (OuterVolumeSpecName: "kube-api-access-7npvb") pod "7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" (UID: "7ec3c60b-e7e1-44d2-b23b-1549c5c349bf"). InnerVolumeSpecName "kube-api-access-7npvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:37:42 crc kubenswrapper[4642]: I0128 07:37:42.627369 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7npvb\" (UniqueName: \"kubernetes.io/projected/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf-kube-api-access-7npvb\") on node \"crc\" DevicePath \"\"" Jan 28 07:37:43 crc kubenswrapper[4642]: I0128 07:37:43.105217 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" path="/var/lib/kubelet/pods/7ec3c60b-e7e1-44d2-b23b-1549c5c349bf/volumes" Jan 28 07:37:43 crc kubenswrapper[4642]: I0128 07:37:43.313115 4642 scope.go:117] "RemoveContainer" containerID="13dfe94f4d08201f1b15239638ddd6fec9b59da777319267b2dc316df0298aba" Jan 28 07:37:43 crc kubenswrapper[4642]: I0128 07:37:43.313134 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/crc-debug-z2w4z" Jan 28 07:37:51 crc kubenswrapper[4642]: I0128 07:37:51.098390 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:37:51 crc kubenswrapper[4642]: E0128 07:37:51.099259 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:37:51 crc kubenswrapper[4642]: I0128 07:37:51.878730 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65b856d58d-bqqcd_9e8d2ea1-c13f-48c7-8481-284676407f2b/barbican-api/0.log" Jan 28 07:37:51 crc kubenswrapper[4642]: I0128 07:37:51.955366 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65b856d58d-bqqcd_9e8d2ea1-c13f-48c7-8481-284676407f2b/barbican-api-log/0.log" Jan 28 07:37:51 crc kubenswrapper[4642]: I0128 07:37:51.991276 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dcc495f5d-rjtzn_14fef196-27d3-4eb2-8347-80de82035a9d/barbican-keystone-listener/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.048217 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dcc495f5d-rjtzn_14fef196-27d3-4eb2-8347-80de82035a9d/barbican-keystone-listener-log/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.135355 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d89d45779-ghhkc_b7df992e-0fe2-4c0b-b217-10bc93d786ac/barbican-worker/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.155977 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d89d45779-ghhkc_b7df992e-0fe2-4c0b-b217-10bc93d786ac/barbican-worker-log/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.268470 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2624h_71799297-5b25-4d16-97d8-5cd3b6e9c52e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.312088 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/ceilometer-central-agent/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.388284 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/ceilometer-notification-agent/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.398589 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/proxy-httpd/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.445490 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/sg-core/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.542691 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7cb1c066-b293-4f15-8056-5422fe062a98/cinder-api/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 
07:37:52.547292 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7cb1c066-b293-4f15-8056-5422fe062a98/cinder-api-log/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.653537 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_57740542-145e-4f7e-a313-ef87683e27cd/cinder-scheduler/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.711817 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_57740542-145e-4f7e-a313-ef87683e27cd/probe/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.778401 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj_a99427f8-9376-4fe9-81ed-cfe6740f4581/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.890097 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-k65rv_e8972141-a9ad-40b1-abb7-5e0fbdf8feda/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:52 crc kubenswrapper[4642]: I0128 07:37:52.943267 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/init/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.056091 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/init/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.084382 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm_f6bd9dfb-a07e-4082-ab80-e7de0f582617/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.115665 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/dnsmasq-dns/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.240352 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2b7c0c6-df21-4342-9aec-f6b7ba5188be/glance-httpd/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.271625 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2b7c0c6-df21-4342-9aec-f6b7ba5188be/glance-log/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.412290 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_16c4c401-8d2d-479c-bbb2-75b0f3ac300a/glance-log/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.445564 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_16c4c401-8d2d-479c-bbb2-75b0f3ac300a/glance-httpd/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.473300 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr_a86d82be-6640-4441-a938-230f7beded20/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.632282 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rtjpp_8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.701283 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f77bb558-ws68h_83dd211e-6375-4640-921f-c26d8181e31b/keystone-api/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.757999 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9/kube-state-metrics/0.log" Jan 28 07:37:53 crc kubenswrapper[4642]: I0128 07:37:53.853121 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5snh2_4fe048e8-d571-4c2a-a306-3b1e9fdc1798/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.119543 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7476bb99fc-vvh9d_49e0885f-27b0-4197-9ff0-95732b63bf51/neutron-httpd/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.129657 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7476bb99fc-vvh9d_49e0885f-27b0-4197-9ff0-95732b63bf51/neutron-api/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.270372 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d_29a8a74b-b7e6-4315-93a4-cde0bdc10ae9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.660733 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fafb88b9-f909-4a9c-92af-63b0428e44e8/nova-api-log/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.821046 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fafb88b9-f909-4a9c-92af-63b0428e44e8/nova-api-api/0.log" Jan 28 07:37:54 crc kubenswrapper[4642]: I0128 07:37:54.833018 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5b79b1b9-2072-44eb-ab2f-977a02871f54/nova-cell0-conductor-conductor/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.044939 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e5b728d1-49f3-4652-b330-89eb118ee26e/nova-cell1-conductor-conductor/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.065398 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8befd04d-7f83-44b3-8136-94b85511b14f/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.220802 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jkwgg_57077b74-e1c4-4ab3-b414-1301bacf7e3c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.273086 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fca45030-caaf-4344-8a7c-5440a27f8e57/nova-metadata-log/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.512252 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_68213c74-0be2-4d55-8f7c-7f5991da4f75/nova-scheduler-scheduler/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.560694 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/mysql-bootstrap/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.711809 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/mysql-bootstrap/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.737769 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/galera/0.log" Jan 28 07:37:55 crc kubenswrapper[4642]: I0128 07:37:55.881935 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/mysql-bootstrap/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.021855 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/mysql-bootstrap/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.059749 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fca45030-caaf-4344-8a7c-5440a27f8e57/nova-metadata-metadata/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.079126 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/galera/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.232267 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c754de0a-7ee3-416f-988d-d0eb4829ea99/openstackclient/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.242395 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9d4kj_29b93c34-de22-48ac-80da-b79048401506/ovn-controller/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.435828 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server-init/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.440158 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xmwjf_9c18095f-18c4-435f-a2cc-216a62127faa/openstack-network-exporter/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.599706 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server-init/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.607130 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.615223 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovs-vswitchd/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.796038 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-h6cx4_2de02231-7ff5-4fea-8660-09a3a907adbe/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.821364 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cc7a3db8-5279-4295-a18a-59749e31d9a4/openstack-network-exporter/0.log" Jan 28 07:37:56 crc kubenswrapper[4642]: I0128 07:37:56.847069 4642 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_cc7a3db8-5279-4295-a18a-59749e31d9a4/ovn-northd/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.056084 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f7f96797-56a8-4fc5-a520-cfaecf44c4a0/ovsdbserver-nb/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.101004 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f7f96797-56a8-4fc5-a520-cfaecf44c4a0/openstack-network-exporter/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.222248 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08e72283-7898-4b33-a2ef-5ebe2a319fe8/openstack-network-exporter/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.251333 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08e72283-7898-4b33-a2ef-5ebe2a319fe8/ovsdbserver-sb/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.438264 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59fd947774-hzkdn_e08591ac-7a27-4fc3-aaf0-b6957a9d94b5/placement-api/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.450639 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59fd947774-hzkdn_e08591ac-7a27-4fc3-aaf0-b6957a9d94b5/placement-log/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.467750 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/setup-container/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.603645 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/rabbitmq/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.648178 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/setup-container/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.667395 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/setup-container/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.843092 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/rabbitmq/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.884548 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/setup-container/0.log" Jan 28 07:37:57 crc kubenswrapper[4642]: I0128 07:37:57.892227 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s_2e6a86f6-9d82-44b5-8f3d-cd0d0520462b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.043798 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4xwd4_0571eda4-a4be-4e57-93f6-b31928d2bdd3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.086971 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl_f8485ba5-af89-41de-82ac-61f80fdf4831/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.213079 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4lgl6_826495f0-3162-41a2-bbf2-f95814348f47/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.273673 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9sx26_00779b3c-2623-48a8-88e3-72355cdcf9f9/ssh-known-hosts-edpm-deployment/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.424252 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4ff4554c-l99rf_31037f93-2b83-4bd0-bcdf-62c0a973432a/proxy-httpd/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.457403 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4ff4554c-l99rf_31037f93-2b83-4bd0-bcdf-62c0a973432a/proxy-server/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.597737 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pp9sd_9c8f1362-c01b-4533-b2fa-a7cbfb573175/swift-ring-rebalance/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.602434 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-auditor/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.695025 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-reaper/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.748461 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-server/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.780065 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-replicator/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.788364 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-auditor/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.855946 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-replicator/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.897863 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-server/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.932055 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-updater/0.log"
Jan 28 07:37:58 crc kubenswrapper[4642]: I0128 07:37:58.961857 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-auditor/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.018879 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-expirer/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.074495 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-replicator/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.080385 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-server/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.105349 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-updater/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.159921 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/rsync/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.230282 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/swift-recon-cron/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.329042 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb_3d342237-d10d-4315-a659-c8f91ecc6d5d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.406490 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_7e694181-faba-42ea-a552-04cdb4a7536d/tempest-tests-tempest-tests-runner/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.491410 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_57fc1bb3-928f-4db7-81b2-6fe911be8403/test-operator-logs-container/0.log"
Jan 28 07:37:59 crc kubenswrapper[4642]: I0128 07:37:59.602897 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h2225_8032b59a-f024-4eb0-93d7-d26a77889a96/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 28 07:38:03 crc kubenswrapper[4642]: I0128 07:38:03.098102 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d"
Jan 28 07:38:03 crc kubenswrapper[4642]: E0128 07:38:03.098499 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:38:06 crc kubenswrapper[4642]: I0128 07:38:06.758799 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_59611c4a-ee6f-4f16-9804-aba66d47d908/memcached/0.log"
Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.286826 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-r6l8l_8f714147-0e51-40d4-bc83-a1bcd90da40f/manager/0.log"
Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.390334 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log"
Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.505816 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log"
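
The long run of log.go:25 "Finished parsing log file" entries above records the kubelet reading back the per-container log files under /var/log/pods/, consistent with the must-gather pod started at 07:36:49 collecting every pod's logs during this window. A minimal sketch of reading one such file, assuming the standard CRI container log format (one line per record: an RFC3339Nano timestamp, the stream name, a P/F partial-or-full tag, then the payload); the path is copied from the memcached entry above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Path taken verbatim from the 07:38:06.758799 entry above.
        f, err := os.Open("/var/log/pods/openstack_memcached-0_59611c4a-ee6f-4f16-9804-aba66d47d908/memcached/0.log")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // CRI log line: "<RFC3339Nano ts> <stdout|stderr> <P|F> <payload>"
            if parts := strings.SplitN(sc.Text(), " ", 4); len(parts) == 4 {
                fmt.Printf("ts=%s stream=%s tag=%s msg=%s\n", parts[0], parts[1], parts[2], parts[3])
            }
        }
    }
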
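The pod_startup_latency_tracker.go:104 entries earlier in this log (07:36:30 for test-operator-logs-pod-tempest-tempest-tests-tempest, 07:36:54 for must-gather-szpf5) encode a checkable relationship: podStartSLOduration is podStartE2EDuration minus the time spent pulling images, where the pull time falls out of the monotonic m=+ offsets (seconds since kubelet start) printed on firstStartedPulling and lastFinishedPulling. A minimal check, not kubelet source, using the numbers exactly as printed:

    package main

    import "fmt"

    func main() {
        // test-operator-logs-pod entry at 07:36:30:
        // 1.842594945 - (m=+2913.808591127 - m=+2912.931377098)
        fmt.Printf("%.9f\n", 1.842594945-(2913.808591127-2912.931377098)) // 0.965380916
        // must-gather-szpf5 entry at 07:36:54:
        // 5.996035424 - (m=+2937.575471973 - m=+2933.143366817)
        fmt.Printf("%.9f\n", 5.996035424-(2937.575471973-2933.143366817)) // 1.563930268
    }

Both results match the podStartSLOduration values the kubelet logged, which is consistent with the pod-startup SLI excluding image-pull time.
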
Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.506499 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.551370 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.657271 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.657682 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.660384 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/extract/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.805764 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ppss4_e7c99a85-efe2-41d4-8682-b91441ed42bf/manager/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.823200 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-cv9ph_8ce8250d-808a-4044-9473-ef4de236ea47/manager/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.948065 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-wvkg2_3b826964-4d30-4419-85ff-e4c4fab25d5f/manager/0.log" Jan 28 07:38:16 crc kubenswrapper[4642]: I0128 07:38:16.982457 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-r2p4j_926efdce-a7f6-465b-b4e8-752d78e79cae/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.083615 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-xqxpn_5af1bfbf-97ed-4ac2-b688-60b50d0800f0/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.253134 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-n42jz_fe0506df-e213-4430-a075-3e4a25ae3bf8/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.325942 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-g9qq4_33d74ff8-8576-4acc-8233-df91f8c11cbd/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.400357 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-jfxhp_fd2f775c-8111-4523-b235-1e61f428b03e/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.469380 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-snqv5_d0b658bf-5e42-4af9-93ce-b6e0b03b1db2/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.576157 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-5n97g_c56780c4-c549-4261-807d-c85fa6bbb166/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.632823 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-nxs2v_955adb33-713e-4988-a885-8c26474165e5/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.754909 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-gxt6x_7c0247c0-e28d-4914-8d63-d90f9ad06fe3/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.789741 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4xmct_43c6d7b6-0086-4de0-b6d6-1a313d0c7214/manager/0.log" Jan 28 07:38:17 crc kubenswrapper[4642]: I0128 07:38:17.890245 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n_e5eb1461-1a4f-403d-bc4f-c05d36ad23e8/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.006338 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-554f878768-rqjln_c27f0ead-ebcd-4c83-ad72-311bcacff990/operator/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.097817 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:38:18 crc kubenswrapper[4642]: E0128 07:38:18.098155 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.229131 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j2f6p_68d33b51-456a-4363-83ec-7f60de722a77/registry-server/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.236095 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-gpvnm_ef130a26-1119-48ca-87c7-9def2d39f0b5/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.432907 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lfdnj_d1e9a5df-6796-4bdb-8412-f2f832aeebd3/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.547072 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8kkj6_1bbb1fbc-a22c-4a90-b15a-abf791757ef2/operator/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.685449 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-x62jq_be43dd0d-944f-4d01-8e8f-22adc9306708/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.875898 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-g5765_65108034-33b6-4b00-8bc0-6dbf2955510c/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.917366 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9f67d7-9kg2t_a453bbb9-176c-413b-82dd-294ecb3bdb2b/manager/0.log" Jan 28 07:38:18 crc kubenswrapper[4642]: I0128 07:38:18.969345 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8n8j8_79a5daf5-be64-4759-bbb6-6d3850ff574e/manager/0.log" Jan 28 07:38:19 crc kubenswrapper[4642]: I0128 07:38:19.043953 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-vrkkm_ffd25d2c-380e-4a54-a2af-ca488f438da7/manager/0.log" Jan 28 07:38:29 crc kubenswrapper[4642]: I0128 07:38:29.098462 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:38:29 crc kubenswrapper[4642]: E0128 07:38:29.100431 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:38:32 crc kubenswrapper[4642]: I0128 07:38:32.025448 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8w4vg_28e7930c-b0c7-4ef7-975d-fe130a30089c/control-plane-machine-set-operator/0.log" Jan 28 07:38:32 crc kubenswrapper[4642]: I0128 07:38:32.193170 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ttt2d_536f8472-158f-45c2-a0f1-b6799b6bdbdd/machine-api-operator/0.log" Jan 28 07:38:32 crc kubenswrapper[4642]: I0128 07:38:32.196787 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ttt2d_536f8472-158f-45c2-a0f1-b6799b6bdbdd/kube-rbac-proxy/0.log" Jan 28 07:38:41 crc kubenswrapper[4642]: I0128 07:38:41.371383 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-l8bb4_e8183488-30bb-4dad-affe-d8ac650f1396/cert-manager-controller/0.log" Jan 28 07:38:41 crc kubenswrapper[4642]: I0128 07:38:41.491994 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qv4ls_130b06ad-fdbf-4c37-b60e-4a6893a00984/cert-manager-cainjector/0.log" Jan 28 07:38:41 crc kubenswrapper[4642]: I0128 07:38:41.534008 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2kkdr_f0d82d56-7c08-4a56-9d8d-14f1b372c248/cert-manager-webhook/0.log" Jan 28 07:38:44 crc kubenswrapper[4642]: I0128 07:38:44.099316 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:38:44 crc kubenswrapper[4642]: E0128 
07:38:44.100087 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.240010 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mghfw_41dcadcf-4728-4cba-9997-5e76250477e6/nmstate-console-plugin/0.log" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.349421 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nbnj6_e311b34d-bd2e-4096-bfd4-734999821b7e/nmstate-handler/0.log" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.416770 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v4jdx_24cc2707-e7fa-4112-83cd-549fede20a62/kube-rbac-proxy/0.log" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.489724 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v4jdx_24cc2707-e7fa-4112-83cd-549fede20a62/nmstate-metrics/0.log" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.547791 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-89bz2_b97164bc-f5f3-489d-b0f2-c33fdf700a20/nmstate-operator/0.log" Jan 28 07:38:51 crc kubenswrapper[4642]: I0128 07:38:51.610691 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-pmhhc_e8385e4f-aa98-4f3c-9712-0ee8951e1322/nmstate-webhook/0.log" Jan 28 07:38:59 crc kubenswrapper[4642]: I0128 07:38:59.098998 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:38:59 crc kubenswrapper[4642]: E0128 07:38:59.099682 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.098659 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:39:10 crc kubenswrapper[4642]: E0128 07:39:10.099221 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.126533 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bwswz_cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2/kube-rbac-proxy/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.203207 4642 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bwswz_cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2/controller/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.266943 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.396318 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.403775 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.407939 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.425298 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.553798 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.565026 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.565223 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.569326 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.702293 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.726797 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/controller/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.728203 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.733361 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.859442 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/frr-metrics/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.869969 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/kube-rbac-proxy/0.log" Jan 28 07:39:10 crc kubenswrapper[4642]: I0128 07:39:10.889950 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/kube-rbac-proxy-frr/0.log" Jan 28 07:39:11 crc 
kubenswrapper[4642]: I0128 07:39:11.040838 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/reloader/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.051313 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-g79zx_6b78a60e-9afd-4252-98b0-a1ba76c8e54c/frr-k8s-webhook-server/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.278495 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fb65d6bf-4cwxd_4b4ddf14-3402-4717-8cd7-9858e01a1bc2/manager/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.367912 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-765f49f7c6-dglx5_1fcc6dc0-d8c3-47a9-965d-dec1320015c6/webhook-server/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.445706 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6jjwb_78316030-2b3d-4a8a-b7ed-3ace14a05e80/kube-rbac-proxy/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.854056 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6jjwb_78316030-2b3d-4a8a-b7ed-3ace14a05e80/speaker/0.log" Jan 28 07:39:11 crc kubenswrapper[4642]: I0128 07:39:11.906155 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/frr/0.log" Jan 28 07:39:19 crc kubenswrapper[4642]: I0128 07:39:19.859894 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:39:19 crc kubenswrapper[4642]: I0128 07:39:19.998126 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.001449 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.012708 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.162714 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.165535 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.176729 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/extract/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.280176 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.373496 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.395017 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.402002 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.506807 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/pull/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.517400 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.540148 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/extract/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.632852 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.756089 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.760081 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.777774 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.876748 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log" Jan 28 07:39:20 crc kubenswrapper[4642]: I0128 07:39:20.906951 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.054478 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.098525 4642 scope.go:117] "RemoveContainer" 
containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:39:21 crc kubenswrapper[4642]: E0128 07:39:21.098769 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.267972 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.273718 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.303785 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/registry-server/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.304557 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.387530 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.395217 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.553908 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mfmj2_09fc0334-7203-49cf-958d-0c34a6dc1bdc/marketplace-operator/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.610954 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/registry-server/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.630120 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.737439 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.742358 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.742647 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.867632 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.907353 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:39:21 crc kubenswrapper[4642]: I0128 07:39:21.990758 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/registry-server/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.038924 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.139813 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.144762 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.160942 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.267287 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.267412 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:39:22 crc kubenswrapper[4642]: I0128 07:39:22.684960 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/registry-server/0.log" Jan 28 07:39:35 crc kubenswrapper[4642]: I0128 07:39:35.098761 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:39:35 crc kubenswrapper[4642]: E0128 07:39:35.099375 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:39:50 crc kubenswrapper[4642]: I0128 07:39:50.098335 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:39:50 crc kubenswrapper[4642]: E0128 07:39:50.098849 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" 
podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:40:04 crc kubenswrapper[4642]: I0128 07:40:04.098834 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:40:04 crc kubenswrapper[4642]: E0128 07:40:04.099362 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:40:15 crc kubenswrapper[4642]: I0128 07:40:15.097942 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:40:15 crc kubenswrapper[4642]: E0128 07:40:15.098635 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:40:29 crc kubenswrapper[4642]: I0128 07:40:29.097941 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:40:29 crc kubenswrapper[4642]: E0128 07:40:29.098523 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" Jan 28 07:40:38 crc kubenswrapper[4642]: I0128 07:40:38.365494 4642 generic.go:334] "Generic (PLEG): container finished" podID="93151dcc-9130-4c41-9721-035e1d183478" containerID="89b48ccde97f03fb4132292404ffb04cce0fca8cd710783ca2518c8116e59654" exitCode=0 Jan 28 07:40:38 crc kubenswrapper[4642]: I0128 07:40:38.365569 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xtv7h/must-gather-szpf5" event={"ID":"93151dcc-9130-4c41-9721-035e1d183478","Type":"ContainerDied","Data":"89b48ccde97f03fb4132292404ffb04cce0fca8cd710783ca2518c8116e59654"} Jan 28 07:40:38 crc kubenswrapper[4642]: I0128 07:40:38.366433 4642 scope.go:117] "RemoveContainer" containerID="89b48ccde97f03fb4132292404ffb04cce0fca8cd710783ca2518c8116e59654" Jan 28 07:40:39 crc kubenswrapper[4642]: I0128 07:40:39.031834 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xtv7h_must-gather-szpf5_93151dcc-9130-4c41-9721-035e1d183478/gather/0.log" Jan 28 07:40:40 crc kubenswrapper[4642]: I0128 07:40:40.098619 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:40:40 crc kubenswrapper[4642]: I0128 07:40:40.380712 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" 
event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026"} Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.252790 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xtv7h/must-gather-szpf5"] Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.253343 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xtv7h/must-gather-szpf5" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="copy" containerID="cri-o://f7e4bb6792368919f1e6f89a40d13b67cefd8ebdc747bb45a3dfed0698e9ca13" gracePeriod=2 Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.259507 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xtv7h/must-gather-szpf5"] Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.419355 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xtv7h_must-gather-szpf5_93151dcc-9130-4c41-9721-035e1d183478/copy/0.log" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.419741 4642 generic.go:334] "Generic (PLEG): container finished" podID="93151dcc-9130-4c41-9721-035e1d183478" containerID="f7e4bb6792368919f1e6f89a40d13b67cefd8ebdc747bb45a3dfed0698e9ca13" exitCode=143 Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.590242 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xtv7h_must-gather-szpf5_93151dcc-9130-4c41-9721-035e1d183478/copy/0.log" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.590862 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.709592 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khp9r\" (UniqueName: \"kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r\") pod \"93151dcc-9130-4c41-9721-035e1d183478\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.710725 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output\") pod \"93151dcc-9130-4c41-9721-035e1d183478\" (UID: \"93151dcc-9130-4c41-9721-035e1d183478\") " Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.719090 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r" (OuterVolumeSpecName: "kube-api-access-khp9r") pod "93151dcc-9130-4c41-9721-035e1d183478" (UID: "93151dcc-9130-4c41-9721-035e1d183478"). InnerVolumeSpecName "kube-api-access-khp9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.814093 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khp9r\" (UniqueName: \"kubernetes.io/projected/93151dcc-9130-4c41-9721-035e1d183478-kube-api-access-khp9r\") on node \"crc\" DevicePath \"\"" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.825837 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "93151dcc-9130-4c41-9721-035e1d183478" (UID: "93151dcc-9130-4c41-9721-035e1d183478"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:40:46 crc kubenswrapper[4642]: I0128 07:40:46.915345 4642 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93151dcc-9130-4c41-9721-035e1d183478-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 07:40:47 crc kubenswrapper[4642]: I0128 07:40:47.106479 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93151dcc-9130-4c41-9721-035e1d183478" path="/var/lib/kubelet/pods/93151dcc-9130-4c41-9721-035e1d183478/volumes" Jan 28 07:40:47 crc kubenswrapper[4642]: I0128 07:40:47.426782 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xtv7h_must-gather-szpf5_93151dcc-9130-4c41-9721-035e1d183478/copy/0.log" Jan 28 07:40:47 crc kubenswrapper[4642]: I0128 07:40:47.427330 4642 scope.go:117] "RemoveContainer" containerID="f7e4bb6792368919f1e6f89a40d13b67cefd8ebdc747bb45a3dfed0698e9ca13" Jan 28 07:40:47 crc kubenswrapper[4642]: I0128 07:40:47.427369 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xtv7h/must-gather-szpf5" Jan 28 07:40:47 crc kubenswrapper[4642]: I0128 07:40:47.441962 4642 scope.go:117] "RemoveContainer" containerID="89b48ccde97f03fb4132292404ffb04cce0fca8cd710783ca2518c8116e59654" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.715875 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:29 crc kubenswrapper[4642]: E0128 07:41:29.716587 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" containerName="container-00" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716608 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" containerName="container-00" Jan 28 07:41:29 crc kubenswrapper[4642]: E0128 07:41:29.716633 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="gather" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716638 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="gather" Jan 28 07:41:29 crc kubenswrapper[4642]: E0128 07:41:29.716645 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="copy" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716651 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="copy" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716819 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="copy" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716847 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec3c60b-e7e1-44d2-b23b-1549c5c349bf" containerName="container-00" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.716861 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="93151dcc-9130-4c41-9721-035e1d183478" containerName="gather" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.718002 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.725354 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.784559 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.784618 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq89\" (UniqueName: \"kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.784678 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.885940 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.886272 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.886325 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq89\" (UniqueName: \"kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.886384 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.886687 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:29 crc kubenswrapper[4642]: I0128 07:41:29.902101 4642 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8zq89\" (UniqueName: \"kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89\") pod \"redhat-marketplace-5t9q7\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:30 crc kubenswrapper[4642]: I0128 07:41:30.031694 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:30 crc kubenswrapper[4642]: I0128 07:41:30.803909 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:31 crc kubenswrapper[4642]: I0128 07:41:31.706284 4642 generic.go:334] "Generic (PLEG): container finished" podID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerID="9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a" exitCode=0 Jan 28 07:41:31 crc kubenswrapper[4642]: I0128 07:41:31.706388 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerDied","Data":"9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a"} Jan 28 07:41:31 crc kubenswrapper[4642]: I0128 07:41:31.706512 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerStarted","Data":"9511a3c911f487cd616bc07056f20432c15e691acda86424bbaeb2dc0099b45d"} Jan 28 07:41:31 crc kubenswrapper[4642]: I0128 07:41:31.708945 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 07:41:32 crc kubenswrapper[4642]: I0128 07:41:32.714669 4642 generic.go:334] "Generic (PLEG): container finished" podID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerID="f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2" exitCode=0 Jan 28 07:41:32 crc kubenswrapper[4642]: I0128 07:41:32.714802 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerDied","Data":"f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2"} Jan 28 07:41:33 crc kubenswrapper[4642]: I0128 07:41:33.724292 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerStarted","Data":"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e"} Jan 28 07:41:33 crc kubenswrapper[4642]: I0128 07:41:33.740910 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5t9q7" podStartSLOduration=3.292707826 podStartE2EDuration="4.740893177s" podCreationTimestamp="2026-01-28 07:41:29 +0000 UTC" firstStartedPulling="2026-01-28 07:41:31.707925157 +0000 UTC m=+3214.940013966" lastFinishedPulling="2026-01-28 07:41:33.156110508 +0000 UTC m=+3216.388199317" observedRunningTime="2026-01-28 07:41:33.738980932 +0000 UTC m=+3216.971069741" watchObservedRunningTime="2026-01-28 07:41:33.740893177 +0000 UTC m=+3216.972981986" Jan 28 07:41:40 crc kubenswrapper[4642]: I0128 07:41:40.032772 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:40 crc kubenswrapper[4642]: I0128 07:41:40.033134 4642 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:40 crc kubenswrapper[4642]: I0128 07:41:40.062635 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:40 crc kubenswrapper[4642]: I0128 07:41:40.796449 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:40 crc kubenswrapper[4642]: I0128 07:41:40.832987 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:42 crc kubenswrapper[4642]: I0128 07:41:42.777769 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5t9q7" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="registry-server" containerID="cri-o://ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e" gracePeriod=2 Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.143248 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.286836 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zq89\" (UniqueName: \"kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89\") pod \"0147255c-5bd2-4bd6-995f-60b72a5322bc\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.287002 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities\") pod \"0147255c-5bd2-4bd6-995f-60b72a5322bc\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.287047 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content\") pod \"0147255c-5bd2-4bd6-995f-60b72a5322bc\" (UID: \"0147255c-5bd2-4bd6-995f-60b72a5322bc\") " Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.287589 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities" (OuterVolumeSpecName: "utilities") pod "0147255c-5bd2-4bd6-995f-60b72a5322bc" (UID: "0147255c-5bd2-4bd6-995f-60b72a5322bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.291034 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89" (OuterVolumeSpecName: "kube-api-access-8zq89") pod "0147255c-5bd2-4bd6-995f-60b72a5322bc" (UID: "0147255c-5bd2-4bd6-995f-60b72a5322bc"). InnerVolumeSpecName "kube-api-access-8zq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.303288 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0147255c-5bd2-4bd6-995f-60b72a5322bc" (UID: "0147255c-5bd2-4bd6-995f-60b72a5322bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.388818 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.388843 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0147255c-5bd2-4bd6-995f-60b72a5322bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.388853 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zq89\" (UniqueName: \"kubernetes.io/projected/0147255c-5bd2-4bd6-995f-60b72a5322bc-kube-api-access-8zq89\") on node \"crc\" DevicePath \"\"" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.784206 4642 generic.go:334] "Generic (PLEG): container finished" podID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerID="ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e" exitCode=0 Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.784240 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerDied","Data":"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e"} Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.784271 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t9q7" event={"ID":"0147255c-5bd2-4bd6-995f-60b72a5322bc","Type":"ContainerDied","Data":"9511a3c911f487cd616bc07056f20432c15e691acda86424bbaeb2dc0099b45d"} Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.784284 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t9q7" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.784287 4642 scope.go:117] "RemoveContainer" containerID="ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.798100 4642 scope.go:117] "RemoveContainer" containerID="f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.808343 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.814998 4642 scope.go:117] "RemoveContainer" containerID="9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.817137 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t9q7"] Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.845559 4642 scope.go:117] "RemoveContainer" containerID="ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e" Jan 28 07:41:43 crc kubenswrapper[4642]: E0128 07:41:43.845905 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e\": container with ID starting with ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e not found: ID does not exist" containerID="ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.845945 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e"} err="failed to get container status \"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e\": rpc error: code = NotFound desc = could not find container \"ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e\": container with ID starting with ac8e506d9dc529f09d82a74ef39e1b51b2445119be0c48980ef65e9e424e2e6e not found: ID does not exist" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.845973 4642 scope.go:117] "RemoveContainer" containerID="f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2" Jan 28 07:41:43 crc kubenswrapper[4642]: E0128 07:41:43.846402 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2\": container with ID starting with f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2 not found: ID does not exist" containerID="f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.846430 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2"} err="failed to get container status \"f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2\": rpc error: code = NotFound desc = could not find container \"f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2\": container with ID starting with f6070de2142e79c3914aa9200529699e03025c1b9308551677f83dfac294a1b2 not found: ID does not exist" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.846447 4642 scope.go:117] "RemoveContainer" 
containerID="9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a" Jan 28 07:41:43 crc kubenswrapper[4642]: E0128 07:41:43.846663 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a\": container with ID starting with 9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a not found: ID does not exist" containerID="9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a" Jan 28 07:41:43 crc kubenswrapper[4642]: I0128 07:41:43.846691 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a"} err="failed to get container status \"9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a\": rpc error: code = NotFound desc = could not find container \"9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a\": container with ID starting with 9eb26fd3c8bca3bb2e849386a184e4fc6ba4e8435131b0a3a0fed9072f0e512a not found: ID does not exist" Jan 28 07:41:45 crc kubenswrapper[4642]: I0128 07:41:45.105156 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" path="/var/lib/kubelet/pods/0147255c-5bd2-4bd6-995f-60b72a5322bc/volumes" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.368420 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4fmnl/must-gather-m4hsk"] Jan 28 07:42:55 crc kubenswrapper[4642]: E0128 07:42:55.369054 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="extract-content" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.369066 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="extract-content" Jan 28 07:42:55 crc kubenswrapper[4642]: E0128 07:42:55.369082 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="registry-server" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.369088 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="registry-server" Jan 28 07:42:55 crc kubenswrapper[4642]: E0128 07:42:55.369101 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="extract-utilities" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.369107 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="extract-utilities" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.369287 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="0147255c-5bd2-4bd6-995f-60b72a5322bc" containerName="registry-server" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.370087 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.372060 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4fmnl"/"openshift-service-ca.crt" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.372714 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4fmnl"/"kube-root-ca.crt" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.386761 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4fmnl/must-gather-m4hsk"] Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.525453 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.525566 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgz5v\" (UniqueName: \"kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.627374 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.627482 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgz5v\" (UniqueName: \"kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.627771 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.643451 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgz5v\" (UniqueName: \"kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v\") pod \"must-gather-m4hsk\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") " pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:55 crc kubenswrapper[4642]: I0128 07:42:55.684582 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" Jan 28 07:42:56 crc kubenswrapper[4642]: I0128 07:42:56.050779 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4fmnl/must-gather-m4hsk"] Jan 28 07:42:56 crc kubenswrapper[4642]: I0128 07:42:56.215036 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" event={"ID":"1937621d-44bd-461a-9387-71399215fb23","Type":"ContainerStarted","Data":"bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415"} Jan 28 07:42:56 crc kubenswrapper[4642]: I0128 07:42:56.215263 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" event={"ID":"1937621d-44bd-461a-9387-71399215fb23","Type":"ContainerStarted","Data":"f78e9e00f4a6cd2efc70e0887c175976dd23e19d5deb25063f5ce06cea0ad8c5"} Jan 28 07:42:57 crc kubenswrapper[4642]: I0128 07:42:57.224168 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" event={"ID":"1937621d-44bd-461a-9387-71399215fb23","Type":"ContainerStarted","Data":"3d2a9e55adede0899c649a7cbf88b64f734ced12e5aff9eb82e978d55c217746"} Jan 28 07:42:57 crc kubenswrapper[4642]: I0128 07:42:57.240711 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" podStartSLOduration=2.240696649 podStartE2EDuration="2.240696649s" podCreationTimestamp="2026-01-28 07:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:42:57.233987475 +0000 UTC m=+3300.466076284" watchObservedRunningTime="2026-01-28 07:42:57.240696649 +0000 UTC m=+3300.472785459" Jan 28 07:42:58 crc kubenswrapper[4642]: I0128 07:42:58.867372 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-gdpzd"] Jan 28 07:42:58 crc kubenswrapper[4642]: I0128 07:42:58.868833 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:58 crc kubenswrapper[4642]: I0128 07:42:58.870453 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4fmnl"/"default-dockercfg-667wc" Jan 28 07:42:58 crc kubenswrapper[4642]: I0128 07:42:58.983046 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwncb\" (UniqueName: \"kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:58 crc kubenswrapper[4642]: I0128 07:42:58.983268 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.086310 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.086466 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwncb\" (UniqueName: \"kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.086785 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.110402 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwncb\" (UniqueName: \"kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb\") pod \"crc-debug-gdpzd\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.181528 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:42:59 crc kubenswrapper[4642]: I0128 07:42:59.239522 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" event={"ID":"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6","Type":"ContainerStarted","Data":"2fd22c61f4f520111ebd0a3ce8507343a71904497e50188f1547c6a9874926f7"} Jan 28 07:43:00 crc kubenswrapper[4642]: I0128 07:43:00.246460 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" event={"ID":"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6","Type":"ContainerStarted","Data":"713fa81dcc4552accd7f6aa4c85bc2abff20b42acd29ae7cbf9bdfabfda83309"} Jan 28 07:43:00 crc kubenswrapper[4642]: I0128 07:43:00.260268 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" podStartSLOduration=2.26025223 podStartE2EDuration="2.26025223s" podCreationTimestamp="2026-01-28 07:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:43:00.25637543 +0000 UTC m=+3303.488464240" watchObservedRunningTime="2026-01-28 07:43:00.26025223 +0000 UTC m=+3303.492341039" Jan 28 07:43:08 crc kubenswrapper[4642]: I0128 07:43:08.199745 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:43:08 crc kubenswrapper[4642]: I0128 07:43:08.200118 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:43:25 crc kubenswrapper[4642]: I0128 07:43:25.390438 4642 generic.go:334] "Generic (PLEG): container finished" podID="5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" containerID="713fa81dcc4552accd7f6aa4c85bc2abff20b42acd29ae7cbf9bdfabfda83309" exitCode=0 Jan 28 07:43:25 crc kubenswrapper[4642]: I0128 07:43:25.390569 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" event={"ID":"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6","Type":"ContainerDied","Data":"713fa81dcc4552accd7f6aa4c85bc2abff20b42acd29ae7cbf9bdfabfda83309"} Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.470640 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.492812 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-gdpzd"] Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.500180 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-gdpzd"] Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.616817 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwncb\" (UniqueName: \"kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb\") pod \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.616948 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host\") pod \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\" (UID: \"5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6\") " Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.617042 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host" (OuterVolumeSpecName: "host") pod "5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" (UID: "5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.617379 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.625623 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb" (OuterVolumeSpecName: "kube-api-access-fwncb") pod "5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" (UID: "5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6"). InnerVolumeSpecName "kube-api-access-fwncb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:43:26 crc kubenswrapper[4642]: I0128 07:43:26.718691 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwncb\" (UniqueName: \"kubernetes.io/projected/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6-kube-api-access-fwncb\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.106123 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" path="/var/lib/kubelet/pods/5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6/volumes" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.406795 4642 scope.go:117] "RemoveContainer" containerID="713fa81dcc4552accd7f6aa4c85bc2abff20b42acd29ae7cbf9bdfabfda83309" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.406911 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-gdpzd" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.624072 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-dvpmq"] Jan 28 07:43:27 crc kubenswrapper[4642]: E0128 07:43:27.624453 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" containerName="container-00" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.624467 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" containerName="container-00" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.624651 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9652c3-0b4c-40f4-a9be-dfb04d7b3ff6" containerName="container-00" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.625149 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.626377 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4fmnl"/"default-dockercfg-667wc" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.633348 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srn9\" (UniqueName: \"kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.633465 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.735295 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srn9\" (UniqueName: \"kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.735413 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.735663 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 07:43:27.749494 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srn9\" (UniqueName: \"kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9\") pod \"crc-debug-dvpmq\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:27 crc kubenswrapper[4642]: I0128 
07:43:27.937291 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:28 crc kubenswrapper[4642]: I0128 07:43:28.416500 4642 generic.go:334] "Generic (PLEG): container finished" podID="951f31ba-2ab1-4c6b-841e-d233c31f3e8f" containerID="998f33c51594b6a84f453edf26ba175ba80648b1ab37a8e1b05db30f5ecdb6bc" exitCode=0 Jan 28 07:43:28 crc kubenswrapper[4642]: I0128 07:43:28.416716 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" event={"ID":"951f31ba-2ab1-4c6b-841e-d233c31f3e8f","Type":"ContainerDied","Data":"998f33c51594b6a84f453edf26ba175ba80648b1ab37a8e1b05db30f5ecdb6bc"} Jan 28 07:43:28 crc kubenswrapper[4642]: I0128 07:43:28.416741 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" event={"ID":"951f31ba-2ab1-4c6b-841e-d233c31f3e8f","Type":"ContainerStarted","Data":"ac329af91f478de1703381a42fb1ba0773ea4a00af65148197e4f1f545cbc58f"} Jan 28 07:43:28 crc kubenswrapper[4642]: I0128 07:43:28.787809 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-dvpmq"] Jan 28 07:43:28 crc kubenswrapper[4642]: I0128 07:43:28.793768 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-dvpmq"] Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.502136 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.560844 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host\") pod \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.561013 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2srn9\" (UniqueName: \"kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9\") pod \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\" (UID: \"951f31ba-2ab1-4c6b-841e-d233c31f3e8f\") " Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.561254 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host" (OuterVolumeSpecName: "host") pod "951f31ba-2ab1-4c6b-841e-d233c31f3e8f" (UID: "951f31ba-2ab1-4c6b-841e-d233c31f3e8f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.561347 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.565470 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9" (OuterVolumeSpecName: "kube-api-access-2srn9") pod "951f31ba-2ab1-4c6b-841e-d233c31f3e8f" (UID: "951f31ba-2ab1-4c6b-841e-d233c31f3e8f"). InnerVolumeSpecName "kube-api-access-2srn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.662638 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2srn9\" (UniqueName: \"kubernetes.io/projected/951f31ba-2ab1-4c6b-841e-d233c31f3e8f-kube-api-access-2srn9\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.909466 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-zgdrd"] Jan 28 07:43:29 crc kubenswrapper[4642]: E0128 07:43:29.909764 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="951f31ba-2ab1-4c6b-841e-d233c31f3e8f" containerName="container-00" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.909776 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="951f31ba-2ab1-4c6b-841e-d233c31f3e8f" containerName="container-00" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.909935 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="951f31ba-2ab1-4c6b-841e-d233c31f3e8f" containerName="container-00" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.910434 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.967495 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj485\" (UniqueName: \"kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:29 crc kubenswrapper[4642]: I0128 07:43:29.967542 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.069210 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.069478 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj485\" (UniqueName: \"kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.069780 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.089845 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj485\" (UniqueName: \"kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485\") pod \"crc-debug-zgdrd\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " 
pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.224421 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:30 crc kubenswrapper[4642]: W0128 07:43:30.252020 4642 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb0905a_6060_4cbc_bfee_d5d339a201a6.slice/crio-a2337af38215e263b9246c3e131f3341125ff6fc2eacdbdff2e13488dd247f6f WatchSource:0}: Error finding container a2337af38215e263b9246c3e131f3341125ff6fc2eacdbdff2e13488dd247f6f: Status 404 returned error can't find the container with id a2337af38215e263b9246c3e131f3341125ff6fc2eacdbdff2e13488dd247f6f Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.430123 4642 generic.go:334] "Generic (PLEG): container finished" podID="1cb0905a-6060-4cbc-bfee-d5d339a201a6" containerID="d03e0ceac1d4810e24670d0b2f5fdfec233763d8471dbd6a4c12fe20fea39551" exitCode=0 Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.430207 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" event={"ID":"1cb0905a-6060-4cbc-bfee-d5d339a201a6","Type":"ContainerDied","Data":"d03e0ceac1d4810e24670d0b2f5fdfec233763d8471dbd6a4c12fe20fea39551"} Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.430503 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" event={"ID":"1cb0905a-6060-4cbc-bfee-d5d339a201a6","Type":"ContainerStarted","Data":"a2337af38215e263b9246c3e131f3341125ff6fc2eacdbdff2e13488dd247f6f"} Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.431430 4642 scope.go:117] "RemoveContainer" containerID="998f33c51594b6a84f453edf26ba175ba80648b1ab37a8e1b05db30f5ecdb6bc" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.431516 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-dvpmq" Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.458598 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-zgdrd"] Jan 28 07:43:30 crc kubenswrapper[4642]: I0128 07:43:30.466570 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4fmnl/crc-debug-zgdrd"] Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.105567 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951f31ba-2ab1-4c6b-841e-d233c31f3e8f" path="/var/lib/kubelet/pods/951f31ba-2ab1-4c6b-841e-d233c31f3e8f/volumes" Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.504068 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.690072 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host\") pod \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.690343 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj485\" (UniqueName: \"kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485\") pod \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\" (UID: \"1cb0905a-6060-4cbc-bfee-d5d339a201a6\") " Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.690181 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host" (OuterVolumeSpecName: "host") pod "1cb0905a-6060-4cbc-bfee-d5d339a201a6" (UID: "1cb0905a-6060-4cbc-bfee-d5d339a201a6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.700345 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485" (OuterVolumeSpecName: "kube-api-access-mj485") pod "1cb0905a-6060-4cbc-bfee-d5d339a201a6" (UID: "1cb0905a-6060-4cbc-bfee-d5d339a201a6"). InnerVolumeSpecName "kube-api-access-mj485". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.791441 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj485\" (UniqueName: \"kubernetes.io/projected/1cb0905a-6060-4cbc-bfee-d5d339a201a6-kube-api-access-mj485\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:31 crc kubenswrapper[4642]: I0128 07:43:31.791468 4642 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cb0905a-6060-4cbc-bfee-d5d339a201a6-host\") on node \"crc\" DevicePath \"\"" Jan 28 07:43:32 crc kubenswrapper[4642]: I0128 07:43:32.445530 4642 scope.go:117] "RemoveContainer" containerID="d03e0ceac1d4810e24670d0b2f5fdfec233763d8471dbd6a4c12fe20fea39551" Jan 28 07:43:32 crc kubenswrapper[4642]: I0128 07:43:32.446080 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4fmnl/crc-debug-zgdrd" Jan 28 07:43:33 crc kubenswrapper[4642]: I0128 07:43:33.106321 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb0905a-6060-4cbc-bfee-d5d339a201a6" path="/var/lib/kubelet/pods/1cb0905a-6060-4cbc-bfee-d5d339a201a6/volumes" Jan 28 07:43:38 crc kubenswrapper[4642]: I0128 07:43:38.199840 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:43:38 crc kubenswrapper[4642]: I0128 07:43:38.200215 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:43:47 crc kubenswrapper[4642]: I0128 07:43:47.845953 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65b856d58d-bqqcd_9e8d2ea1-c13f-48c7-8481-284676407f2b/barbican-api/0.log" Jan 28 07:43:47 crc kubenswrapper[4642]: I0128 07:43:47.924567 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65b856d58d-bqqcd_9e8d2ea1-c13f-48c7-8481-284676407f2b/barbican-api-log/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.020995 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dcc495f5d-rjtzn_14fef196-27d3-4eb2-8347-80de82035a9d/barbican-keystone-listener/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.030101 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6dcc495f5d-rjtzn_14fef196-27d3-4eb2-8347-80de82035a9d/barbican-keystone-listener-log/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.087542 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d89d45779-ghhkc_b7df992e-0fe2-4c0b-b217-10bc93d786ac/barbican-worker/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.153979 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d89d45779-ghhkc_b7df992e-0fe2-4c0b-b217-10bc93d786ac/barbican-worker-log/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.231507 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2624h_71799297-5b25-4d16-97d8-5cd3b6e9c52e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.331154 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/ceilometer-central-agent/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.353265 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/ceilometer-notification-agent/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.396059 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/proxy-httpd/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.458718 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_f050f220-7652-43cd-8de4-fe0f291f46cc/sg-core/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.516850 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7cb1c066-b293-4f15-8056-5422fe062a98/cinder-api/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.543549 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7cb1c066-b293-4f15-8056-5422fe062a98/cinder-api-log/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.673118 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_57740542-145e-4f7e-a313-ef87683e27cd/probe/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.677465 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_57740542-145e-4f7e-a313-ef87683e27cd/cinder-scheduler/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.805123 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xqnkj_a99427f8-9376-4fe9-81ed-cfe6740f4581/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.826783 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-k65rv_e8972141-a9ad-40b1-abb7-5e0fbdf8feda/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:48 crc kubenswrapper[4642]: I0128 07:43:48.934336 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/init/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.059836 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/init/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.069123 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5s2mm_f6bd9dfb-a07e-4082-ab80-e7de0f582617/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.226579 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-774c5cf667-7hfrh_592db514-d1a6-421d-87f4-60ab08a05885/dnsmasq-dns/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.323876 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2b7c0c6-df21-4342-9aec-f6b7ba5188be/glance-httpd/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.335540 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2b7c0c6-df21-4342-9aec-f6b7ba5188be/glance-log/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.463608 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_16c4c401-8d2d-479c-bbb2-75b0f3ac300a/glance-log/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.487453 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_16c4c401-8d2d-479c-bbb2-75b0f3ac300a/glance-httpd/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.506123 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j5rjr_a86d82be-6640-4441-a938-230f7beded20/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.665452 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rtjpp_8ce13f76-42f1-46cb-a43b-cdb1acaf6cd8/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.806750 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f77bb558-ws68h_83dd211e-6375-4640-921f-c26d8181e31b/keystone-api/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.819177 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_36fd1727-1fc5-4cf4-91a2-4d2a01c1d7c9/kube-state-metrics/0.log" Jan 28 07:43:49 crc kubenswrapper[4642]: I0128 07:43:49.955964 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5snh2_4fe048e8-d571-4c2a-a306-3b1e9fdc1798/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.251789 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7476bb99fc-vvh9d_49e0885f-27b0-4197-9ff0-95732b63bf51/neutron-httpd/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.271726 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7476bb99fc-vvh9d_49e0885f-27b0-4197-9ff0-95732b63bf51/neutron-api/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.462246 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jd58d_29a8a74b-b7e6-4315-93a4-cde0bdc10ae9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.830634 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fafb88b9-f909-4a9c-92af-63b0428e44e8/nova-api-log/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.970523 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5b79b1b9-2072-44eb-ab2f-977a02871f54/nova-cell0-conductor-conductor/0.log" Jan 28 07:43:50 crc kubenswrapper[4642]: I0128 07:43:50.976913 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fafb88b9-f909-4a9c-92af-63b0428e44e8/nova-api-api/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.070363 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e5b728d1-49f3-4652-b330-89eb118ee26e/nova-cell1-conductor-conductor/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.191652 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8befd04d-7f83-44b3-8136-94b85511b14f/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.242261 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jkwgg_57077b74-e1c4-4ab3-b414-1301bacf7e3c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.466673 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fca45030-caaf-4344-8a7c-5440a27f8e57/nova-metadata-log/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 
07:43:51.620177 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_68213c74-0be2-4d55-8f7c-7f5991da4f75/nova-scheduler-scheduler/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.669421 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/mysql-bootstrap/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.815940 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/galera/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.826036 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c03a521e-dd32-4a74-b452-512fe8bdae8e/mysql-bootstrap/0.log" Jan 28 07:43:51 crc kubenswrapper[4642]: I0128 07:43:51.966558 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/mysql-bootstrap/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.172202 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/mysql-bootstrap/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.197332 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fca45030-caaf-4344-8a7c-5440a27f8e57/nova-metadata-metadata/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.214303 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_602638e1-0a19-4a7f-a752-50b0e228a7da/galera/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.455247 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c754de0a-7ee3-416f-988d-d0eb4829ea99/openstackclient/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.531275 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9d4kj_29b93c34-de22-48ac-80da-b79048401506/ovn-controller/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.648987 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xmwjf_9c18095f-18c4-435f-a2cc-216a62127faa/openstack-network-exporter/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.709273 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server-init/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.864573 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server-init/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.898796 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovsdb-server/0.log" Jan 28 07:43:52 crc kubenswrapper[4642]: I0128 07:43:52.923359 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nvphd_e0307f10-0ff0-4421-91a1-34ff47b17d16/ovs-vswitchd/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.063292 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-h6cx4_2de02231-7ff5-4fea-8660-09a3a907adbe/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: 
I0128 07:43:53.108921 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cc7a3db8-5279-4295-a18a-59749e31d9a4/ovn-northd/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.115922 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cc7a3db8-5279-4295-a18a-59749e31d9a4/openstack-network-exporter/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.293615 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f7f96797-56a8-4fc5-a520-cfaecf44c4a0/openstack-network-exporter/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.309659 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f7f96797-56a8-4fc5-a520-cfaecf44c4a0/ovsdbserver-nb/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.428792 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08e72283-7898-4b33-a2ef-5ebe2a319fe8/openstack-network-exporter/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.441438 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08e72283-7898-4b33-a2ef-5ebe2a319fe8/ovsdbserver-sb/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.561703 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59fd947774-hzkdn_e08591ac-7a27-4fc3-aaf0-b6957a9d94b5/placement-api/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.647867 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59fd947774-hzkdn_e08591ac-7a27-4fc3-aaf0-b6957a9d94b5/placement-log/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.704720 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/setup-container/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.874580 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/setup-container/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.874910 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_830d2eb5-3d8a-4b74-833e-758894985129/rabbitmq/0.log" Jan 28 07:43:53 crc kubenswrapper[4642]: I0128 07:43:53.936020 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/setup-container/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.090722 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/setup-container/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.109624 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-w7l2s_2e6a86f6-9d82-44b5-8f3d-cd0d0520462b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.163436 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_62614395-0b52-4d39-865d-c42587ac034b/rabbitmq/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.306659 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4xwd4_0571eda4-a4be-4e57-93f6-b31928d2bdd3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.319432 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-92kzl_f8485ba5-af89-41de-82ac-61f80fdf4831/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.441036 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4lgl6_826495f0-3162-41a2-bbf2-f95814348f47/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.508466 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9sx26_00779b3c-2623-48a8-88e3-72355cdcf9f9/ssh-known-hosts-edpm-deployment/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.672654 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4ff4554c-l99rf_31037f93-2b83-4bd0-bcdf-62c0a973432a/proxy-server/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.735317 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4ff4554c-l99rf_31037f93-2b83-4bd0-bcdf-62c0a973432a/proxy-httpd/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.754143 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-pp9sd_9c8f1362-c01b-4533-b2fa-a7cbfb573175/swift-ring-rebalance/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.897175 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-auditor/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.922273 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-reaper/0.log" Jan 28 07:43:54 crc kubenswrapper[4642]: I0128 07:43:54.928456 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-replicator/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.070417 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-auditor/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.071930 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/account-server/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.108915 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-replicator/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.110169 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-server/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.236776 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-auditor/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.266103 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/container-updater/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.277074 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-expirer/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.457981 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-replicator/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.539627 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-updater/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.558357 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/object-server/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.607372 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/rsync/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.637948 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4dd91859-662e-4131-a376-57998c03d752/swift-recon-cron/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.761537 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xb7sb_3d342237-d10d-4315-a659-c8f91ecc6d5d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.836520 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_7e694181-faba-42ea-a552-04cdb4a7536d/tempest-tests-tempest-tests-runner/0.log" Jan 28 07:43:55 crc kubenswrapper[4642]: I0128 07:43:55.885502 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_57fc1bb3-928f-4db7-81b2-6fe911be8403/test-operator-logs-container/0.log" Jan 28 07:43:56 crc kubenswrapper[4642]: I0128 07:43:56.033194 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-h2225_8032b59a-f024-4eb0-93d7-d26a77889a96/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 07:44:03 crc kubenswrapper[4642]: I0128 07:44:03.024909 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_59611c4a-ee6f-4f16-9804-aba66d47d908/memcached/0.log" Jan 28 07:44:03 crc kubenswrapper[4642]: I0128 07:44:03.981608 4642 scope.go:117] "RemoveContainer" containerID="afae51055b23c9ded43a8b5b4b0a5ec69a1b7684d11466aaa1012703de2c964b" Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.199376 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.199756 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.199793 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.200512 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.200558 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026" gracePeriod=600 Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.702942 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026" exitCode=0 Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.703278 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026"} Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.703304 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerStarted","Data":"26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"} Jan 28 07:44:08 crc kubenswrapper[4642]: I0128 07:44:08.703319 4642 scope.go:117] "RemoveContainer" containerID="0dcd78f9e568ada3af5f90732ccb64d686cd1d0ba16db578037a8ae26e1efe3d" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.009057 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-r6l8l_8f714147-0e51-40d4-bc83-a1bcd90da40f/manager/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.117978 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.228742 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.242422 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.266070 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.382036 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/extract/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.387133 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/pull/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.399637 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2rpf9j_ca36e19a-c862-47bc-b335-0f3d55dc2d4c/util/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.523421 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ppss4_e7c99a85-efe2-41d4-8682-b91441ed42bf/manager/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.529328 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-cv9ph_8ce8250d-808a-4044-9473-ef4de236ea47/manager/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.688529 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-wvkg2_3b826964-4d30-4419-85ff-e4c4fab25d5f/manager/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.698558 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-r2p4j_926efdce-a7f6-465b-b4e8-752d78e79cae/manager/0.log" Jan 28 07:44:13 crc kubenswrapper[4642]: I0128 07:44:13.824170 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-xqxpn_5af1bfbf-97ed-4ac2-b688-60b50d0800f0/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.018011 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-n42jz_fe0506df-e213-4430-a075-3e4a25ae3bf8/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.023491 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-g9qq4_33d74ff8-8576-4acc-8233-df91f8c11cbd/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.073354 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-jfxhp_fd2f775c-8111-4523-b235-1e61f428b03e/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.165658 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-snqv5_d0b658bf-5e42-4af9-93ce-b6e0b03b1db2/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.245546 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-5n97g_c56780c4-c549-4261-807d-c85fa6bbb166/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.336897 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-nxs2v_955adb33-713e-4988-a885-8c26474165e5/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.417313 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-gxt6x_7c0247c0-e28d-4914-8d63-d90f9ad06fe3/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.489503 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4xmct_43c6d7b6-0086-4de0-b6d6-1a313d0c7214/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.549403 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b5d4999dch6b8n_e5eb1461-1a4f-403d-bc4f-c05d36ad23e8/manager/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.703392 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-554f878768-rqjln_c27f0ead-ebcd-4c83-ad72-311bcacff990/operator/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.846633 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-j2f6p_68d33b51-456a-4363-83ec-7f60de722a77/registry-server/0.log" Jan 28 07:44:14 crc kubenswrapper[4642]: I0128 07:44:14.965016 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-gpvnm_ef130a26-1119-48ca-87c7-9def2d39f0b5/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.049024 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lfdnj_d1e9a5df-6796-4bdb-8412-f2f832aeebd3/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.182577 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8kkj6_1bbb1fbc-a22c-4a90-b15a-abf791757ef2/operator/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.376476 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-x62jq_be43dd0d-944f-4d01-8e8f-22adc9306708/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.488179 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-g5765_65108034-33b6-4b00-8bc0-6dbf2955510c/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.567465 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9f67d7-9kg2t_a453bbb9-176c-413b-82dd-294ecb3bdb2b/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.572153 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8n8j8_79a5daf5-be64-4759-bbb6-6d3850ff574e/manager/0.log" Jan 28 07:44:15 crc kubenswrapper[4642]: I0128 07:44:15.660164 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-vrkkm_ffd25d2c-380e-4a54-a2af-ca488f438da7/manager/0.log" Jan 28 07:44:28 crc kubenswrapper[4642]: I0128 07:44:28.130820 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8w4vg_28e7930c-b0c7-4ef7-975d-fe130a30089c/control-plane-machine-set-operator/0.log" Jan 28 07:44:28 crc kubenswrapper[4642]: I0128 07:44:28.268945 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ttt2d_536f8472-158f-45c2-a0f1-b6799b6bdbdd/kube-rbac-proxy/0.log" Jan 28 07:44:28 crc kubenswrapper[4642]: I0128 07:44:28.275208 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ttt2d_536f8472-158f-45c2-a0f1-b6799b6bdbdd/machine-api-operator/0.log" Jan 28 07:44:36 crc kubenswrapper[4642]: I0128 07:44:36.209816 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-l8bb4_e8183488-30bb-4dad-affe-d8ac650f1396/cert-manager-controller/0.log" Jan 28 07:44:36 crc kubenswrapper[4642]: I0128 07:44:36.267597 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qv4ls_130b06ad-fdbf-4c37-b60e-4a6893a00984/cert-manager-cainjector/0.log" Jan 28 07:44:36 crc kubenswrapper[4642]: I0128 07:44:36.326580 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2kkdr_f0d82d56-7c08-4a56-9d8d-14f1b372c248/cert-manager-webhook/0.log" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.479761 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:37 crc kubenswrapper[4642]: E0128 07:44:37.480303 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb0905a-6060-4cbc-bfee-d5d339a201a6" containerName="container-00" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.480316 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb0905a-6060-4cbc-bfee-d5d339a201a6" containerName="container-00" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.480457 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb0905a-6060-4cbc-bfee-d5d339a201a6" containerName="container-00" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.481526 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.488235 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.577942 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.578148 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.578294 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tnz\" (UniqueName: \"kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.679737 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.679835 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tnz\" (UniqueName: \"kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.679926 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.680127 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.680305 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.696616 4642 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s9tnz\" (UniqueName: \"kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz\") pod \"community-operators-dnwgs\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:37 crc kubenswrapper[4642]: I0128 07:44:37.795897 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:38 crc kubenswrapper[4642]: I0128 07:44:38.238261 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:38 crc kubenswrapper[4642]: I0128 07:44:38.883165 4642 generic.go:334] "Generic (PLEG): container finished" podID="1d369055-6c84-46df-bd09-9591ace9a132" containerID="e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881" exitCode=0 Jan 28 07:44:38 crc kubenswrapper[4642]: I0128 07:44:38.883246 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerDied","Data":"e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881"} Jan 28 07:44:38 crc kubenswrapper[4642]: I0128 07:44:38.883410 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerStarted","Data":"d316574f3692addd399a3a31edbaf4635879ca18ca6f4b81cd6830526051cdb5"} Jan 28 07:44:39 crc kubenswrapper[4642]: I0128 07:44:39.891021 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerStarted","Data":"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0"} Jan 28 07:44:40 crc kubenswrapper[4642]: I0128 07:44:40.899172 4642 generic.go:334] "Generic (PLEG): container finished" podID="1d369055-6c84-46df-bd09-9591ace9a132" containerID="923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0" exitCode=0 Jan 28 07:44:40 crc kubenswrapper[4642]: I0128 07:44:40.899293 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerDied","Data":"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0"} Jan 28 07:44:41 crc kubenswrapper[4642]: I0128 07:44:41.906673 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerStarted","Data":"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0"} Jan 28 07:44:41 crc kubenswrapper[4642]: I0128 07:44:41.926491 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dnwgs" podStartSLOduration=2.459639187 podStartE2EDuration="4.926478699s" podCreationTimestamp="2026-01-28 07:44:37 +0000 UTC" firstStartedPulling="2026-01-28 07:44:38.884774405 +0000 UTC m=+3402.116863214" lastFinishedPulling="2026-01-28 07:44:41.351613917 +0000 UTC m=+3404.583702726" observedRunningTime="2026-01-28 07:44:41.922970393 +0000 UTC m=+3405.155059202" watchObservedRunningTime="2026-01-28 07:44:41.926478699 +0000 UTC m=+3405.158567509" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.620338 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mghfw_41dcadcf-4728-4cba-9997-5e76250477e6/nmstate-console-plugin/0.log" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.734965 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nbnj6_e311b34d-bd2e-4096-bfd4-734999821b7e/nmstate-handler/0.log" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.787341 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v4jdx_24cc2707-e7fa-4112-83cd-549fede20a62/kube-rbac-proxy/0.log" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.823077 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v4jdx_24cc2707-e7fa-4112-83cd-549fede20a62/nmstate-metrics/0.log" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.900146 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-89bz2_b97164bc-f5f3-489d-b0f2-c33fdf700a20/nmstate-operator/0.log" Jan 28 07:44:44 crc kubenswrapper[4642]: I0128 07:44:44.949685 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-pmhhc_e8385e4f-aa98-4f3c-9712-0ee8951e1322/nmstate-webhook/0.log" Jan 28 07:44:47 crc kubenswrapper[4642]: I0128 07:44:47.796751 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:47 crc kubenswrapper[4642]: I0128 07:44:47.796972 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:47 crc kubenswrapper[4642]: I0128 07:44:47.828675 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:47 crc kubenswrapper[4642]: I0128 07:44:47.971324 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:48 crc kubenswrapper[4642]: I0128 07:44:48.052964 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:49 crc kubenswrapper[4642]: I0128 07:44:49.954458 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dnwgs" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="registry-server" containerID="cri-o://82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0" gracePeriod=2 Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.309665 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.381841 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities\") pod \"1d369055-6c84-46df-bd09-9591ace9a132\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.381906 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9tnz\" (UniqueName: \"kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz\") pod \"1d369055-6c84-46df-bd09-9591ace9a132\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.381998 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content\") pod \"1d369055-6c84-46df-bd09-9591ace9a132\" (UID: \"1d369055-6c84-46df-bd09-9591ace9a132\") " Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.382481 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities" (OuterVolumeSpecName: "utilities") pod "1d369055-6c84-46df-bd09-9591ace9a132" (UID: "1d369055-6c84-46df-bd09-9591ace9a132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.386265 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz" (OuterVolumeSpecName: "kube-api-access-s9tnz") pod "1d369055-6c84-46df-bd09-9591ace9a132" (UID: "1d369055-6c84-46df-bd09-9591ace9a132"). InnerVolumeSpecName "kube-api-access-s9tnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.416081 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d369055-6c84-46df-bd09-9591ace9a132" (UID: "1d369055-6c84-46df-bd09-9591ace9a132"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.484625 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.484657 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9tnz\" (UniqueName: \"kubernetes.io/projected/1d369055-6c84-46df-bd09-9591ace9a132-kube-api-access-s9tnz\") on node \"crc\" DevicePath \"\"" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.484667 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d369055-6c84-46df-bd09-9591ace9a132-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.962228 4642 generic.go:334] "Generic (PLEG): container finished" podID="1d369055-6c84-46df-bd09-9591ace9a132" containerID="82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0" exitCode=0 Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.962272 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerDied","Data":"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0"} Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.962298 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnwgs" event={"ID":"1d369055-6c84-46df-bd09-9591ace9a132","Type":"ContainerDied","Data":"d316574f3692addd399a3a31edbaf4635879ca18ca6f4b81cd6830526051cdb5"} Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.962316 4642 scope.go:117] "RemoveContainer" containerID="82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.963051 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnwgs" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.985893 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.987202 4642 scope.go:117] "RemoveContainer" containerID="923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0" Jan 28 07:44:50 crc kubenswrapper[4642]: I0128 07:44:50.992549 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dnwgs"] Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.021370 4642 scope.go:117] "RemoveContainer" containerID="e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.034506 4642 scope.go:117] "RemoveContainer" containerID="82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0" Jan 28 07:44:51 crc kubenswrapper[4642]: E0128 07:44:51.034878 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0\": container with ID starting with 82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0 not found: ID does not exist" containerID="82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.034914 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0"} err="failed to get container status \"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0\": rpc error: code = NotFound desc = could not find container \"82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0\": container with ID starting with 82c43e090a96d478bc2392dbcc207a8faaee8059d91d4ad82d15e487a25b91d0 not found: ID does not exist" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.034939 4642 scope.go:117] "RemoveContainer" containerID="923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0" Jan 28 07:44:51 crc kubenswrapper[4642]: E0128 07:44:51.035279 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0\": container with ID starting with 923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0 not found: ID does not exist" containerID="923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.035303 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0"} err="failed to get container status \"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0\": rpc error: code = NotFound desc = could not find container \"923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0\": container with ID starting with 923f313f812a913d1f7f23a6354e43ac6e5a7ca891700d744b715261c0065ea0 not found: ID does not exist" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.035316 4642 scope.go:117] "RemoveContainer" containerID="e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881" Jan 28 07:44:51 crc kubenswrapper[4642]: E0128 07:44:51.035650 4642 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881\": container with ID starting with e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881 not found: ID does not exist" containerID="e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.035676 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881"} err="failed to get container status \"e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881\": rpc error: code = NotFound desc = could not find container \"e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881\": container with ID starting with e82adab1d9412c0477fe4efb1ca7996486029740b5fe9163256ea29375f68881 not found: ID does not exist" Jan 28 07:44:51 crc kubenswrapper[4642]: I0128 07:44:51.106511 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d369055-6c84-46df-bd09-9591ace9a132" path="/var/lib/kubelet/pods/1d369055-6c84-46df-bd09-9591ace9a132/volumes" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.131427 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j"] Jan 28 07:45:00 crc kubenswrapper[4642]: E0128 07:45:00.132063 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="extract-utilities" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.132075 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="extract-utilities" Jan 28 07:45:00 crc kubenswrapper[4642]: E0128 07:45:00.132100 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="registry-server" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.132106 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="registry-server" Jan 28 07:45:00 crc kubenswrapper[4642]: E0128 07:45:00.132123 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="extract-content" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.132130 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="extract-content" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.132289 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d369055-6c84-46df-bd09-9591ace9a132" containerName="registry-server" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.132788 4642 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.134565 4642 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.141140 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j"] Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.143155 4642 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.230910 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjlc\" (UniqueName: \"kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.231100 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.231196 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.331873 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjlc\" (UniqueName: \"kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.332001 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.332077 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.332766 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume\") pod 
\"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.336736 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.345331 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjlc\" (UniqueName: \"kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc\") pod \"collect-profiles-29493105-hwp2j\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.446271 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:00 crc kubenswrapper[4642]: I0128 07:45:00.802681 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j"] Jan 28 07:45:01 crc kubenswrapper[4642]: I0128 07:45:01.024924 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" event={"ID":"295cc9fb-9445-44e7-9a2f-1b2230e16f1e","Type":"ContainerStarted","Data":"060339323c0b1fb5b51dfd61c5ceb6c85356e59f7f1b22e7650dbd25ec8b7424"} Jan 28 07:45:01 crc kubenswrapper[4642]: I0128 07:45:01.024968 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" event={"ID":"295cc9fb-9445-44e7-9a2f-1b2230e16f1e","Type":"ContainerStarted","Data":"0fdd913c38b21a1c921c0eabdae4c743469bfd5572e8caeeafa2629512baecac"} Jan 28 07:45:01 crc kubenswrapper[4642]: I0128 07:45:01.038721 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" podStartSLOduration=1.038707454 podStartE2EDuration="1.038707454s" podCreationTimestamp="2026-01-28 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 07:45:01.034637822 +0000 UTC m=+3424.266726630" watchObservedRunningTime="2026-01-28 07:45:01.038707454 +0000 UTC m=+3424.270796263" Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.032666 4642 generic.go:334] "Generic (PLEG): container finished" podID="295cc9fb-9445-44e7-9a2f-1b2230e16f1e" containerID="060339323c0b1fb5b51dfd61c5ceb6c85356e59f7f1b22e7650dbd25ec8b7424" exitCode=0 Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.032768 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" event={"ID":"295cc9fb-9445-44e7-9a2f-1b2230e16f1e","Type":"ContainerDied","Data":"060339323c0b1fb5b51dfd61c5ceb6c85356e59f7f1b22e7650dbd25ec8b7424"} Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.386642 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-bwswz_cb4d1f9b-a7f7-4bc5-89e1-3c175cbdaee2/kube-rbac-proxy/0.log" Jan 28 07:45:02 crc 
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.527524 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.665776 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.682734 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.704601 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.711436 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.814996 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.819933 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.831176 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.852412 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.967933 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-frr-files/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.974447 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-reloader/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.977719 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/cp-metrics/0.log"
Jan 28 07:45:02 crc kubenswrapper[4642]: I0128 07:45:02.992533 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/controller/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.104866 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/frr-metrics/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.130658 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/kube-rbac-proxy/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.140053 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/kube-rbac-proxy-frr/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.293223 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/reloader/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.299000 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-g79zx_6b78a60e-9afd-4252-98b0-a1ba76c8e54c/frr-k8s-webhook-server/0.log"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.319069 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j"
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.373568 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpjlc\" (UniqueName: \"kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc\") pod \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") "
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.373714 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume\") pod \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") "
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.373811 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume\") pod \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\" (UID: \"295cc9fb-9445-44e7-9a2f-1b2230e16f1e\") "
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.374480 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "295cc9fb-9445-44e7-9a2f-1b2230e16f1e" (UID: "295cc9fb-9445-44e7-9a2f-1b2230e16f1e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.377985 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc" (OuterVolumeSpecName: "kube-api-access-xpjlc") pod "295cc9fb-9445-44e7-9a2f-1b2230e16f1e" (UID: "295cc9fb-9445-44e7-9a2f-1b2230e16f1e"). InnerVolumeSpecName "kube-api-access-xpjlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.379232 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "295cc9fb-9445-44e7-9a2f-1b2230e16f1e" (UID: "295cc9fb-9445-44e7-9a2f-1b2230e16f1e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.470873 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85fb65d6bf-4cwxd_4b4ddf14-3402-4717-8cd7-9858e01a1bc2/manager/0.log" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.475229 4642 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.475253 4642 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.475263 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpjlc\" (UniqueName: \"kubernetes.io/projected/295cc9fb-9445-44e7-9a2f-1b2230e16f1e-kube-api-access-xpjlc\") on node \"crc\" DevicePath \"\"" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.658068 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-765f49f7c6-dglx5_1fcc6dc0-d8c3-47a9-965d-dec1320015c6/webhook-server/0.log" Jan 28 07:45:03 crc kubenswrapper[4642]: I0128 07:45:03.735340 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6jjwb_78316030-2b3d-4a8a-b7ed-3ace14a05e80/kube-rbac-proxy/0.log" Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.047088 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" event={"ID":"295cc9fb-9445-44e7-9a2f-1b2230e16f1e","Type":"ContainerDied","Data":"0fdd913c38b21a1c921c0eabdae4c743469bfd5572e8caeeafa2629512baecac"} Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.047322 4642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdd913c38b21a1c921c0eabdae4c743469bfd5572e8caeeafa2629512baecac" Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.047161 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493105-hwp2j" Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.137854 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6jjwb_78316030-2b3d-4a8a-b7ed-3ace14a05e80/speaker/0.log" Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.177160 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c9xns_7eff0229-6d46-439f-9e3b-b1382d2615ee/frr/0.log" Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.370221 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg"] Jan 28 07:45:04 crc kubenswrapper[4642]: I0128 07:45:04.376134 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493060-9w4pg"] Jan 28 07:45:05 crc kubenswrapper[4642]: I0128 07:45:05.107269 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febb5d9c-82a0-457d-8bc8-e84da755454d" path="/var/lib/kubelet/pods/febb5d9c-82a0-457d-8bc8-e84da755454d/volumes" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.252786 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.424667 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.438645 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.447252 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.544020 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/util/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.562106 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/pull/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.568630 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx9p7q_fec6cae0-ef33-4521-b704-1fead4aca74b/extract/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.670397 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.796836 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log" Jan 28 07:45:12 crc kubenswrapper[4642]: 
Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.819864 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/pull/0.log"
Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.901611 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/util/0.log"
Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.923965 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/extract/0.log"
Jan 28 07:45:12 crc kubenswrapper[4642]: I0128 07:45:12.932044 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137cnpp_4ac15d4c-285c-4cef-8de9-b532767c0a6b/pull/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.028437 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.150239 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.150276 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.151375 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.269292 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.270662 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.457836 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.609816 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.638985 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.665014 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.681534 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-65j4l_04f55e4c-8e13-4147-87e5-c69535042a39/registry-server/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.747179 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-utilities/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.790008 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/extract-content/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.927143 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mfmj2_09fc0334-7203-49cf-958d-0c34a6dc1bdc/marketplace-operator/0.log"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.978383 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"]
Jan 28 07:45:13 crc kubenswrapper[4642]: E0128 07:45:13.978727 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295cc9fb-9445-44e7-9a2f-1b2230e16f1e" containerName="collect-profiles"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.978744 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="295cc9fb-9445-44e7-9a2f-1b2230e16f1e" containerName="collect-profiles"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.978930 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="295cc9fb-9445-44e7-9a2f-1b2230e16f1e" containerName="collect-profiles"
Jan 28 07:45:13 crc kubenswrapper[4642]: I0128 07:45:13.980543 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvpcf"
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.000622 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"] Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.019476 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.026156 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjvsc_70ce17da-66a5-4aed-90f5-3ed27538b630/registry-server/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.040173 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.040261 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.040398 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzknl\" (UniqueName: \"kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.141869 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzknl\" (UniqueName: \"kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.141969 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.142041 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.142408 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.142731 
4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.160232 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzknl\" (UniqueName: \"kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl\") pod \"redhat-operators-wvpcf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") " pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.174558 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.199010 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.203464 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.299272 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.382361 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.422334 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/extract-content/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.496079 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pnx2g_01782f50-40a0-4a5d-ba1d-0fd6846cb642/registry-server/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.569518 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.730951 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"] Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.741801 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.741803 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.772866 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.860812 4642 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-utilities/0.log" Jan 28 07:45:14 crc kubenswrapper[4642]: I0128 07:45:14.929420 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/extract-content/0.log" Jan 28 07:45:15 crc kubenswrapper[4642]: I0128 07:45:15.130445 4642 generic.go:334] "Generic (PLEG): container finished" podID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerID="fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02" exitCode=0 Jan 28 07:45:15 crc kubenswrapper[4642]: I0128 07:45:15.130490 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerDied","Data":"fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02"} Jan 28 07:45:15 crc kubenswrapper[4642]: I0128 07:45:15.130516 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerStarted","Data":"3bc373aa76fadd8967c95972075ea0ffa0c14d146f57e19257960ecf1d3b272a"} Jan 28 07:45:15 crc kubenswrapper[4642]: I0128 07:45:15.425020 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gz7t_f07b2642-09a0-4490-bbe8-3e3a48e2a81a/registry-server/0.log" Jan 28 07:45:16 crc kubenswrapper[4642]: I0128 07:45:16.138000 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerStarted","Data":"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409"} Jan 28 07:45:17 crc kubenswrapper[4642]: I0128 07:45:17.145719 4642 generic.go:334] "Generic (PLEG): container finished" podID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerID="aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409" exitCode=0 Jan 28 07:45:17 crc kubenswrapper[4642]: I0128 07:45:17.145805 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerDied","Data":"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409"} Jan 28 07:45:18 crc kubenswrapper[4642]: I0128 07:45:18.154979 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerStarted","Data":"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61"} Jan 28 07:45:18 crc kubenswrapper[4642]: I0128 07:45:18.171150 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wvpcf" podStartSLOduration=2.550712624 podStartE2EDuration="5.171137302s" podCreationTimestamp="2026-01-28 07:45:13 +0000 UTC" firstStartedPulling="2026-01-28 07:45:15.131977127 +0000 UTC m=+3438.364065936" lastFinishedPulling="2026-01-28 07:45:17.752401815 +0000 UTC m=+3440.984490614" observedRunningTime="2026-01-28 07:45:18.167323041 +0000 UTC m=+3441.399411850" watchObservedRunningTime="2026-01-28 07:45:18.171137302 +0000 UTC m=+3441.403226111" Jan 28 07:45:24 crc kubenswrapper[4642]: I0128 07:45:24.300293 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:24 crc 
Jan 28 07:45:24 crc kubenswrapper[4642]: I0128 07:45:24.300568 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wvpcf"
Jan 28 07:45:24 crc kubenswrapper[4642]: I0128 07:45:24.332492 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wvpcf"
Jan 28 07:45:25 crc kubenswrapper[4642]: I0128 07:45:25.236372 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wvpcf"
Jan 28 07:45:25 crc kubenswrapper[4642]: I0128 07:45:25.283740 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"]
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.217079 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wvpcf" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="registry-server" containerID="cri-o://dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61" gracePeriod=2
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.634373 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvpcf"
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.752768 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzknl\" (UniqueName: \"kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl\") pod \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") "
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.753316 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content\") pod \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") "
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.758391 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities\") pod \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\" (UID: \"bb18bd7a-57f6-4d79-af4b-352a7733b9bf\") "
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.758887 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities" (OuterVolumeSpecName: "utilities") pod "bb18bd7a-57f6-4d79-af4b-352a7733b9bf" (UID: "bb18bd7a-57f6-4d79-af4b-352a7733b9bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.759883 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.761641 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl" (OuterVolumeSpecName: "kube-api-access-qzknl") pod "bb18bd7a-57f6-4d79-af4b-352a7733b9bf" (UID: "bb18bd7a-57f6-4d79-af4b-352a7733b9bf"). InnerVolumeSpecName "kube-api-access-qzknl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.831435 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb18bd7a-57f6-4d79-af4b-352a7733b9bf" (UID: "bb18bd7a-57f6-4d79-af4b-352a7733b9bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.861738 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzknl\" (UniqueName: \"kubernetes.io/projected/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-kube-api-access-qzknl\") on node \"crc\" DevicePath \"\""
Jan 28 07:45:27 crc kubenswrapper[4642]: I0128 07:45:27.861766 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb18bd7a-57f6-4d79-af4b-352a7733b9bf-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.225634 4642 generic.go:334] "Generic (PLEG): container finished" podID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerID="dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61" exitCode=0
Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.225681 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerDied","Data":"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61"}
Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.225710 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvpcf" event={"ID":"bb18bd7a-57f6-4d79-af4b-352a7733b9bf","Type":"ContainerDied","Data":"3bc373aa76fadd8967c95972075ea0ffa0c14d146f57e19257960ecf1d3b272a"}
Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.225726 4642 scope.go:117] "RemoveContainer" containerID="dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61"
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvpcf" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.249315 4642 scope.go:117] "RemoveContainer" containerID="aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.263316 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"] Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.271384 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wvpcf"] Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.279315 4642 scope.go:117] "RemoveContainer" containerID="fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.307739 4642 scope.go:117] "RemoveContainer" containerID="dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61" Jan 28 07:45:28 crc kubenswrapper[4642]: E0128 07:45:28.309300 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61\": container with ID starting with dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61 not found: ID does not exist" containerID="dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.309339 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61"} err="failed to get container status \"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61\": rpc error: code = NotFound desc = could not find container \"dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61\": container with ID starting with dab10e9d55b76d66acbaef32fca92850afeee0ef3aae5192e077a7b6ba963c61 not found: ID does not exist" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.309360 4642 scope.go:117] "RemoveContainer" containerID="aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409" Jan 28 07:45:28 crc kubenswrapper[4642]: E0128 07:45:28.309784 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409\": container with ID starting with aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409 not found: ID does not exist" containerID="aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.309816 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409"} err="failed to get container status \"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409\": rpc error: code = NotFound desc = could not find container \"aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409\": container with ID starting with aa19ac46fbe9caf98b7ba32dafb9ff8b5071e62154827371bda4d3d6a419d409 not found: ID does not exist" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.309841 4642 scope.go:117] "RemoveContainer" containerID="fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02" Jan 28 07:45:28 crc kubenswrapper[4642]: E0128 07:45:28.310180 4642 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02\": container with ID starting with fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02 not found: ID does not exist" containerID="fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02" Jan 28 07:45:28 crc kubenswrapper[4642]: I0128 07:45:28.310249 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02"} err="failed to get container status \"fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02\": rpc error: code = NotFound desc = could not find container \"fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02\": container with ID starting with fa2dd667789af8484d7779cba2095353b0e41cb4f9e945f6540c72d421029f02 not found: ID does not exist" Jan 28 07:45:29 crc kubenswrapper[4642]: I0128 07:45:29.106067 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" path="/var/lib/kubelet/pods/bb18bd7a-57f6-4d79-af4b-352a7733b9bf/volumes" Jan 28 07:45:30 crc kubenswrapper[4642]: E0128 07:45:30.195863 4642 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.248:42504->192.168.25.248:37973: write tcp 192.168.25.248:42504->192.168.25.248:37973: write: broken pipe Jan 28 07:46:04 crc kubenswrapper[4642]: I0128 07:46:04.077201 4642 scope.go:117] "RemoveContainer" containerID="d6f045c17d53905b1878be61c8d0d08769d628230ad4db0e4f63d9a4237f5596" Jan 28 07:46:08 crc kubenswrapper[4642]: I0128 07:46:08.199379 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 07:46:08 crc kubenswrapper[4642]: I0128 07:46:08.200465 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.636127 4642 generic.go:334] "Generic (PLEG): container finished" podID="1937621d-44bd-461a-9387-71399215fb23" containerID="bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415" exitCode=0 Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.636268 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" event={"ID":"1937621d-44bd-461a-9387-71399215fb23","Type":"ContainerDied","Data":"bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415"} Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.640275 4642 scope.go:117] "RemoveContainer" containerID="bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415" Jan 28 07:46:31 crc kubenswrapper[4642]: I0128 07:46:31.303720 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4fmnl_must-gather-m4hsk_1937621d-44bd-461a-9387-71399215fb23/gather/0.log" Jan 28 07:46:38 crc kubenswrapper[4642]: I0128 07:46:38.199890 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon 
Jan 28 07:46:08 crc kubenswrapper[4642]: I0128 07:46:08.199379 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 07:46:08 crc kubenswrapper[4642]: I0128 07:46:08.200465 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.636127 4642 generic.go:334] "Generic (PLEG): container finished" podID="1937621d-44bd-461a-9387-71399215fb23" containerID="bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415" exitCode=0
Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.636268 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" event={"ID":"1937621d-44bd-461a-9387-71399215fb23","Type":"ContainerDied","Data":"bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415"}
Jan 28 07:46:30 crc kubenswrapper[4642]: I0128 07:46:30.640275 4642 scope.go:117] "RemoveContainer" containerID="bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415"
Jan 28 07:46:31 crc kubenswrapper[4642]: I0128 07:46:31.303720 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4fmnl_must-gather-m4hsk_1937621d-44bd-461a-9387-71399215fb23/gather/0.log"
Jan 28 07:46:38 crc kubenswrapper[4642]: I0128 07:46:38.199890 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 07:46:38 crc kubenswrapper[4642]: I0128 07:46:38.200421 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.533070 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4fmnl/must-gather-m4hsk"]
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.533560 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4fmnl/must-gather-m4hsk" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="copy" containerID="cri-o://3d2a9e55adede0899c649a7cbf88b64f734ced12e5aff9eb82e978d55c217746" gracePeriod=2
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.540468 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4fmnl/must-gather-m4hsk"]
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.703630 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4fmnl_must-gather-m4hsk_1937621d-44bd-461a-9387-71399215fb23/copy/0.log"
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.703951 4642 generic.go:334] "Generic (PLEG): container finished" podID="1937621d-44bd-461a-9387-71399215fb23" containerID="3d2a9e55adede0899c649a7cbf88b64f734ced12e5aff9eb82e978d55c217746" exitCode=143
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.882124 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4fmnl_must-gather-m4hsk_1937621d-44bd-461a-9387-71399215fb23/copy/0.log"
Jan 28 07:46:40 crc kubenswrapper[4642]: I0128 07:46:40.882439 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/must-gather-m4hsk"
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.075467 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgz5v\" (UniqueName: \"kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v\") pod \"1937621d-44bd-461a-9387-71399215fb23\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") "
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.075727 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output\") pod \"1937621d-44bd-461a-9387-71399215fb23\" (UID: \"1937621d-44bd-461a-9387-71399215fb23\") "
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.080312 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v" (OuterVolumeSpecName: "kube-api-access-rgz5v") pod "1937621d-44bd-461a-9387-71399215fb23" (UID: "1937621d-44bd-461a-9387-71399215fb23"). InnerVolumeSpecName "kube-api-access-rgz5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.177858 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgz5v\" (UniqueName: \"kubernetes.io/projected/1937621d-44bd-461a-9387-71399215fb23-kube-api-access-rgz5v\") on node \"crc\" DevicePath \"\""
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.187887 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1937621d-44bd-461a-9387-71399215fb23" (UID: "1937621d-44bd-461a-9387-71399215fb23"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.279637 4642 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1937621d-44bd-461a-9387-71399215fb23-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.711971 4642 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4fmnl_must-gather-m4hsk_1937621d-44bd-461a-9387-71399215fb23/copy/0.log"
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.712358 4642 scope.go:117] "RemoveContainer" containerID="3d2a9e55adede0899c649a7cbf88b64f734ced12e5aff9eb82e978d55c217746"
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.712404 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4fmnl/must-gather-m4hsk"
Jan 28 07:46:41 crc kubenswrapper[4642]: I0128 07:46:41.726176 4642 scope.go:117] "RemoveContainer" containerID="bc61c0908dd7ccae29805f46638aacedfb4870497d5aa46027069dffd6212415"
Jan 28 07:46:43 crc kubenswrapper[4642]: I0128 07:46:43.106215 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1937621d-44bd-461a-9387-71399215fb23" path="/var/lib/kubelet/pods/1937621d-44bd-461a-9387-71399215fb23/volumes"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.199864 4642 patch_prober.go:28] interesting pod/machine-config-daemon-hdsmf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.200296 4642 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.200334 4642 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.200867 4642 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"} pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.200916 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175" containerName="machine-config-daemon" containerID="cri-o://26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb" gracePeriod=600
Jan 28 07:47:08 crc kubenswrapper[4642]: E0128 07:47:08.317418 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.870069 4642 generic.go:334] "Generic (PLEG): container finished" podID="338ae955-434d-40bd-8519-580badf3e175" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb" exitCode=0
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.870107 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" event={"ID":"338ae955-434d-40bd-8519-580badf3e175","Type":"ContainerDied","Data":"26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"}
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.870146 4642 scope.go:117] "RemoveContainer" containerID="ac735d129b9c30db88be66b981b03419f68aa61de1ef5f25e471c18bc7e1d026"
Jan 28 07:47:08 crc kubenswrapper[4642]: I0128 07:47:08.870800 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:47:08 crc kubenswrapper[4642]: E0128 07:47:08.871341 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:47:24 crc kubenswrapper[4642]: I0128 07:47:24.098389 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:47:24 crc kubenswrapper[4642]: E0128 07:47:24.098983 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.782993 4642 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86pqc"]
Jan 28 07:47:36 crc kubenswrapper[4642]: E0128 07:47:36.783649 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="extract-content"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783660 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="extract-content"
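The "back-off 5m0s" in the CrashLoopBackOff errors above is kubelet's restart back-off at its cap. A sketch of the delay schedule, assuming kubelet's documented defaults (10s initial delay, doubling per failed restart, capped, and reset once the container runs cleanly for a while); the helper name is ours:

    def crashloop_delays(base=10.0, cap=300.0):
        """Restart back-off: the base delay doubles per failed restart, capped at
        5m0s as seen in the log. The 10s base and doubling are kubelet's documented
        defaults, assumed here rather than taken from this log."""
        delay = base
        while True:
            yield min(delay, cap)
            delay *= 2

    g = crashloop_delays()
    print([next(g) for _ in range(7)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

machine-config-daemon has evidently failed often enough that every retry now waits the full five minutes.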
Jan 28 07:47:36 crc kubenswrapper[4642]: E0128 07:47:36.783676 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="gather"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783681 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="gather"
Jan 28 07:47:36 crc kubenswrapper[4642]: E0128 07:47:36.783690 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="copy"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783695 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="copy"
Jan 28 07:47:36 crc kubenswrapper[4642]: E0128 07:47:36.783701 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="extract-utilities"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783706 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="extract-utilities"
Jan 28 07:47:36 crc kubenswrapper[4642]: E0128 07:47:36.783717 4642 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="registry-server"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783723 4642 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="registry-server"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783885 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="gather"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783905 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="1937621d-44bd-461a-9387-71399215fb23" containerName="copy"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.783922 4642 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb18bd7a-57f6-4d79-af4b-352a7733b9bf" containerName="registry-server"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.784982 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.791455 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86pqc"]
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.809667 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.809724 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.810037 4642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnq8\" (UniqueName: \"kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.911347 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbnq8\" (UniqueName: \"kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.911456 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.911480 4642 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.911999 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.912060 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:36 crc kubenswrapper[4642]: I0128 07:47:36.927485 4642 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbnq8\" (UniqueName: \"kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8\") pod \"certified-operators-86pqc\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:37 crc kubenswrapper[4642]: I0128 07:47:37.098087 4642 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86pqc"
Jan 28 07:47:37 crc kubenswrapper[4642]: I0128 07:47:37.106061 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:47:37 crc kubenswrapper[4642]: E0128 07:47:37.106392 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:47:37 crc kubenswrapper[4642]: I0128 07:47:37.505846 4642 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86pqc"]
Jan 28 07:47:38 crc kubenswrapper[4642]: I0128 07:47:38.049750 4642 generic.go:334] "Generic (PLEG): container finished" podID="74381c54-51b4-404e-aef9-4c7607649f6f" containerID="afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac" exitCode=0
Jan 28 07:47:38 crc kubenswrapper[4642]: I0128 07:47:38.049843 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerDied","Data":"afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac"}
Jan 28 07:47:38 crc kubenswrapper[4642]: I0128 07:47:38.049983 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerStarted","Data":"5c0b5027f7225eeb70254c4640b5cccaa1e9e148ccc54283a2e8d4e315add981"}
Jan 28 07:47:38 crc kubenswrapper[4642]: I0128 07:47:38.051320 4642 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 07:47:39 crc kubenswrapper[4642]: I0128 07:47:39.057277 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerStarted","Data":"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183"}
Jan 28 07:47:40 crc kubenswrapper[4642]: I0128 07:47:40.064974 4642 generic.go:334] "Generic (PLEG): container finished" podID="74381c54-51b4-404e-aef9-4c7607649f6f" containerID="0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183" exitCode=0
Jan 28 07:47:40 crc kubenswrapper[4642]: I0128 07:47:40.065076 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerDied","Data":"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183"}
event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerStarted","Data":"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5"} Jan 28 07:47:41 crc kubenswrapper[4642]: I0128 07:47:41.088594 4642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86pqc" podStartSLOduration=2.5994957359999997 podStartE2EDuration="5.088566839s" podCreationTimestamp="2026-01-28 07:47:36 +0000 UTC" firstStartedPulling="2026-01-28 07:47:38.051070902 +0000 UTC m=+3581.283159711" lastFinishedPulling="2026-01-28 07:47:40.540142005 +0000 UTC m=+3583.772230814" observedRunningTime="2026-01-28 07:47:41.088415635 +0000 UTC m=+3584.320504444" watchObservedRunningTime="2026-01-28 07:47:41.088566839 +0000 UTC m=+3584.320655648" Jan 28 07:47:47 crc kubenswrapper[4642]: I0128 07:47:47.105468 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:47 crc kubenswrapper[4642]: I0128 07:47:47.105864 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:47 crc kubenswrapper[4642]: I0128 07:47:47.136788 4642 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:48 crc kubenswrapper[4642]: I0128 07:47:48.149170 4642 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:48 crc kubenswrapper[4642]: I0128 07:47:48.763425 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86pqc"] Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.127016 4642 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86pqc" podUID="74381c54-51b4-404e-aef9-4c7607649f6f" containerName="registry-server" containerID="cri-o://a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5" gracePeriod=2 Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.490677 4642 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.531631 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbnq8\" (UniqueName: \"kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8\") pod \"74381c54-51b4-404e-aef9-4c7607649f6f\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.532246 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities\") pod \"74381c54-51b4-404e-aef9-4c7607649f6f\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.532348 4642 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content\") pod \"74381c54-51b4-404e-aef9-4c7607649f6f\" (UID: \"74381c54-51b4-404e-aef9-4c7607649f6f\") " Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.532968 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities" (OuterVolumeSpecName: "utilities") pod "74381c54-51b4-404e-aef9-4c7607649f6f" (UID: "74381c54-51b4-404e-aef9-4c7607649f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.538254 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8" (OuterVolumeSpecName: "kube-api-access-nbnq8") pod "74381c54-51b4-404e-aef9-4c7607649f6f" (UID: "74381c54-51b4-404e-aef9-4c7607649f6f"). InnerVolumeSpecName "kube-api-access-nbnq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.569710 4642 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74381c54-51b4-404e-aef9-4c7607649f6f" (UID: "74381c54-51b4-404e-aef9-4c7607649f6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.635384 4642 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbnq8\" (UniqueName: \"kubernetes.io/projected/74381c54-51b4-404e-aef9-4c7607649f6f-kube-api-access-nbnq8\") on node \"crc\" DevicePath \"\"" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.635413 4642 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 07:47:50 crc kubenswrapper[4642]: I0128 07:47:50.635424 4642 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74381c54-51b4-404e-aef9-4c7607649f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.135014 4642 generic.go:334] "Generic (PLEG): container finished" podID="74381c54-51b4-404e-aef9-4c7607649f6f" containerID="a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5" exitCode=0 Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.135057 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerDied","Data":"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5"} Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.135066 4642 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86pqc" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.135090 4642 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86pqc" event={"ID":"74381c54-51b4-404e-aef9-4c7607649f6f","Type":"ContainerDied","Data":"5c0b5027f7225eeb70254c4640b5cccaa1e9e148ccc54283a2e8d4e315add981"} Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.135112 4642 scope.go:117] "RemoveContainer" containerID="a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.151936 4642 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86pqc"] Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.153343 4642 scope.go:117] "RemoveContainer" containerID="0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.158179 4642 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86pqc"] Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.175960 4642 scope.go:117] "RemoveContainer" containerID="afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.198636 4642 scope.go:117] "RemoveContainer" containerID="a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5" Jan 28 07:47:51 crc kubenswrapper[4642]: E0128 07:47:51.199081 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5\": container with ID starting with a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5 not found: ID does not exist" containerID="a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5" Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.199173 
Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.199173 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5"} err="failed to get container status \"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5\": rpc error: code = NotFound desc = could not find container \"a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5\": container with ID starting with a780c8b8dbee6b53130fe6e9b1f7120969df589136b41326abcf47931bc87be5 not found: ID does not exist"
Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.199296 4642 scope.go:117] "RemoveContainer" containerID="0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183"
Jan 28 07:47:51 crc kubenswrapper[4642]: E0128 07:47:51.199620 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183\": container with ID starting with 0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183 not found: ID does not exist" containerID="0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183"
Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.199732 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183"} err="failed to get container status \"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183\": rpc error: code = NotFound desc = could not find container \"0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183\": container with ID starting with 0f2c9bc7e1fe862f8d3d140d476416da8f926b6e54eee9a833a7f5c6be50e183 not found: ID does not exist"
Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.199818 4642 scope.go:117] "RemoveContainer" containerID="afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac"
Jan 28 07:47:51 crc kubenswrapper[4642]: E0128 07:47:51.200115 4642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac\": container with ID starting with afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac not found: ID does not exist" containerID="afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac"
Jan 28 07:47:51 crc kubenswrapper[4642]: I0128 07:47:51.200294 4642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac"} err="failed to get container status \"afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac\": rpc error: code = NotFound desc = could not find container \"afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac\": container with ID starting with afb9788c29a3e2121571e1394916bcda3e0e51e5934d605cca76ff1aa4f1e4ac not found: ID does not exist"
Jan 28 07:47:52 crc kubenswrapper[4642]: I0128 07:47:52.099055 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:47:52 crc kubenswrapper[4642]: E0128 07:47:52.099633 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:47:53 crc kubenswrapper[4642]: I0128 07:47:53.106308 4642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74381c54-51b4-404e-aef9-4c7607649f6f" path="/var/lib/kubelet/pods/74381c54-51b4-404e-aef9-4c7607649f6f/volumes"
Jan 28 07:48:05 crc kubenswrapper[4642]: I0128 07:48:05.098483 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:48:05 crc kubenswrapper[4642]: E0128 07:48:05.099318 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:48:18 crc kubenswrapper[4642]: I0128 07:48:18.099239 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:48:18 crc kubenswrapper[4642]: E0128 07:48:18.100164 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:48:30 crc kubenswrapper[4642]: I0128 07:48:30.098482 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:48:30 crc kubenswrapper[4642]: E0128 07:48:30.099296 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:48:43 crc kubenswrapper[4642]: I0128 07:48:43.098590 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:48:43 crc kubenswrapper[4642]: E0128 07:48:43.099252 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:48:55 crc kubenswrapper[4642]: I0128 07:48:55.099587 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:48:55 crc kubenswrapper[4642]: E0128 07:48:55.100219 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
Jan 28 07:49:09 crc kubenswrapper[4642]: I0128 07:49:09.098405 4642 scope.go:117] "RemoveContainer" containerID="26d147d0901f656ab56ada364063322de3b63cd44a4dedbd8dd59c3d8b67e6eb"
Jan 28 07:49:09 crc kubenswrapper[4642]: E0128 07:49:09.099017 4642 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hdsmf_openshift-machine-config-operator(338ae955-434d-40bd-8519-580badf3e175)\"" pod="openshift-machine-config-operator/machine-config-daemon-hdsmf" podUID="338ae955-434d-40bd-8519-580badf3e175"
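While the back-off is in force, the pod worker keeps re-evaluating the pod and logging the same RemoveContainer / "Error syncing pod" pair without actually restarting anything; above, that pair repeats roughly every 12 to 15 seconds from 07:47:52 through 07:49:09 as the 5m0s back-off runs down. A sketch extracting that cadence; the regex is an assumption about this log's format:

    import re, sys
    from datetime import datetime

    RETRY = re.compile(r'E0128 (\d{2}:\d{2}:\d{2}\.\d+).*?with CrashLoopBackOff')

    stamps = [datetime.strptime(m.group(1), "%H:%M:%S.%f")
              for line in sys.stdin if (m := RETRY.search(line))]
    print([round((b - a).total_seconds(), 1) for a, b in zip(stamps, stamps[1:])])
    # For machine-config-daemon-hdsmf above: gaps of roughly 12-15s between retries.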